Jul 10 00:30:10.987897 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 10 00:30:10.987919 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jul 9 22:54:34 -00 2025
Jul 10 00:30:10.987929 kernel: KASLR enabled
Jul 10 00:30:10.987935 kernel: efi: EFI v2.7 by EDK II
Jul 10 00:30:10.987940 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 10 00:30:10.987946 kernel: random: crng init done
Jul 10 00:30:10.987953 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:30:10.987959 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 10 00:30:10.987965 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 10 00:30:10.987972 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:30:10.987978 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:30:10.987984 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:30:10.987990 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:30:10.987997 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:30:10.988004 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:30:10.988012 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:30:10.988018 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:30:10.988025 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:30:10.988031 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 10 00:30:10.988037 kernel: NUMA: Failed to initialise from firmware
Jul 10 00:30:10.988056 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:30:10.988063 kernel: NUMA: NODE_DATA [mem 0xdc95a800-0xdc95ffff]
Jul 10 00:30:10.988069 kernel: Zone ranges:
Jul 10 00:30:10.988075 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:30:10.988082 kernel: DMA32 empty
Jul 10 00:30:10.988089 kernel: Normal empty
Jul 10 00:30:10.988096 kernel: Movable zone start for each node
Jul 10 00:30:10.988102 kernel: Early memory node ranges
Jul 10 00:30:10.988108 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 10 00:30:10.988115 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 10 00:30:10.988121 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 10 00:30:10.988127 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 10 00:30:10.988134 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 10 00:30:10.988140 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 10 00:30:10.988147 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 10 00:30:10.988153 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:30:10.988160 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 10 00:30:10.988167 kernel: psci: probing for conduit method from ACPI.
Jul 10 00:30:10.988174 kernel: psci: PSCIv1.1 detected in firmware.
Jul 10 00:30:10.988181 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 10 00:30:10.988190 kernel: psci: Trusted OS migration not required
Jul 10 00:30:10.988196 kernel: psci: SMC Calling Convention v1.1
Jul 10 00:30:10.988203 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 10 00:30:10.988211 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 10 00:30:10.988218 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 10 00:30:10.988225 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 10 00:30:10.988232 kernel: Detected PIPT I-cache on CPU0
Jul 10 00:30:10.988239 kernel: CPU features: detected: GIC system register CPU interface
Jul 10 00:30:10.988246 kernel: CPU features: detected: Hardware dirty bit management
Jul 10 00:30:10.988252 kernel: CPU features: detected: Spectre-v4
Jul 10 00:30:10.988259 kernel: CPU features: detected: Spectre-BHB
Jul 10 00:30:10.988266 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 10 00:30:10.988273 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 10 00:30:10.988281 kernel: CPU features: detected: ARM erratum 1418040
Jul 10 00:30:10.988288 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 10 00:30:10.988294 kernel: alternatives: applying boot alternatives
Jul 10 00:30:10.988302 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2fac453dfc912247542078b0007b053dde4e47cc6ef808508492c36b6016a78f
Jul 10 00:30:10.988310 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:30:10.988316 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:30:10.988323 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:30:10.988330 kernel: Fallback order for Node 0: 0
Jul 10 00:30:10.988337 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 10 00:30:10.988344 kernel: Policy zone: DMA
Jul 10 00:30:10.988351 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:30:10.988364 kernel: software IO TLB: area num 4.
Jul 10 00:30:10.988372 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 10 00:30:10.988379 kernel: Memory: 2386412K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185876K reserved, 0K cma-reserved)
Jul 10 00:30:10.988386 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 10 00:30:10.988393 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:30:10.988400 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:30:10.988407 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 10 00:30:10.988414 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:30:10.988421 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:30:10.988428 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:30:10.988434 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 10 00:30:10.988441 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 10 00:30:10.988450 kernel: GICv3: 256 SPIs implemented
Jul 10 00:30:10.988457 kernel: GICv3: 0 Extended SPIs implemented
Jul 10 00:30:10.988464 kernel: Root IRQ handler: gic_handle_irq
Jul 10 00:30:10.988471 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 10 00:30:10.988482 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 10 00:30:10.988492 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 10 00:30:10.988499 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 10 00:30:10.988506 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 10 00:30:10.988513 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 10 00:30:10.988521 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 10 00:30:10.988528 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:30:10.988537 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:30:10.988544 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 10 00:30:10.988551 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 10 00:30:10.988558 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 10 00:30:10.988565 kernel: arm-pv: using stolen time PV
Jul 10 00:30:10.988573 kernel: Console: colour dummy device 80x25
Jul 10 00:30:10.988580 kernel: ACPI: Core revision 20230628
Jul 10 00:30:10.988588 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 10 00:30:10.988595 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:30:10.988602 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 10 00:30:10.988615 kernel: landlock: Up and running.
Jul 10 00:30:10.988622 kernel: SELinux: Initializing.
Jul 10 00:30:10.988629 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:30:10.988636 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:30:10.988644 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:30:10.988651 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:30:10.988658 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:30:10.988665 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:30:10.988672 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 10 00:30:10.988681 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 10 00:30:10.988688 kernel: Remapping and enabling EFI services.
Jul 10 00:30:10.988694 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:30:10.988701 kernel: Detected PIPT I-cache on CPU1
Jul 10 00:30:10.988709 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 10 00:30:10.988716 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 10 00:30:10.988723 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:30:10.988730 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 10 00:30:10.988737 kernel: Detected PIPT I-cache on CPU2
Jul 10 00:30:10.988744 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 10 00:30:10.988753 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 10 00:30:10.988760 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:30:10.988772 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 10 00:30:10.988780 kernel: Detected PIPT I-cache on CPU3
Jul 10 00:30:10.988787 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 10 00:30:10.988795 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 10 00:30:10.988802 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:30:10.988809 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 10 00:30:10.988816 kernel: smp: Brought up 1 node, 4 CPUs
Jul 10 00:30:10.988825 kernel: SMP: Total of 4 processors activated.
Jul 10 00:30:10.988832 kernel: CPU features: detected: 32-bit EL0 Support
Jul 10 00:30:10.988840 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 10 00:30:10.988847 kernel: CPU features: detected: Common not Private translations
Jul 10 00:30:10.988854 kernel: CPU features: detected: CRC32 instructions
Jul 10 00:30:10.988862 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 10 00:30:10.988869 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 10 00:30:10.988877 kernel: CPU features: detected: LSE atomic instructions
Jul 10 00:30:10.988886 kernel: CPU features: detected: Privileged Access Never
Jul 10 00:30:10.988893 kernel: CPU features: detected: RAS Extension Support
Jul 10 00:30:10.988901 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 10 00:30:10.988910 kernel: CPU: All CPU(s) started at EL1
Jul 10 00:30:10.988918 kernel: alternatives: applying system-wide alternatives
Jul 10 00:30:10.988926 kernel: devtmpfs: initialized
Jul 10 00:30:10.988935 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:30:10.988944 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 10 00:30:10.988954 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:30:10.988963 kernel: SMBIOS 3.0.0 present.
Jul 10 00:30:10.988971 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 10 00:30:10.988978 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:30:10.988986 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 10 00:30:10.988993 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 10 00:30:10.989002 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 10 00:30:10.989010 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:30:10.989017 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jul 10 00:30:10.989025 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:30:10.989034 kernel: cpuidle: using governor menu
Jul 10 00:30:10.989137 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 10 00:30:10.989146 kernel: ASID allocator initialised with 32768 entries
Jul 10 00:30:10.989154 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:30:10.989161 kernel: Serial: AMBA PL011 UART driver
Jul 10 00:30:10.989169 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 10 00:30:10.989176 kernel: Modules: 0 pages in range for non-PLT usage
Jul 10 00:30:10.989184 kernel: Modules: 509008 pages in range for PLT usage
Jul 10 00:30:10.989191 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:30:10.989201 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:30:10.989209 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 10 00:30:10.989216 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 10 00:30:10.989223 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:30:10.989231 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:30:10.989238 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 10 00:30:10.989246 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 10 00:30:10.989253 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:30:10.989261 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:30:10.989270 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:30:10.989277 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:30:10.989284 kernel: ACPI: Interpreter enabled
Jul 10 00:30:10.989292 kernel: ACPI: Using GIC for interrupt routing
Jul 10 00:30:10.989299 kernel: ACPI: MCFG table detected, 1 entries
Jul 10 00:30:10.989307 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 10 00:30:10.989314 kernel: printk: console [ttyAMA0] enabled
Jul 10 00:30:10.989322 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:30:10.989479 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:30:10.989563 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 10 00:30:10.989630 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 10 00:30:10.989700 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 10 00:30:10.989765 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 10 00:30:10.989775 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 10 00:30:10.989783 kernel: PCI host bridge to bus 0000:00
Jul 10 00:30:10.989855 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 10 00:30:10.989919 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 10 00:30:10.989978 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 10 00:30:10.990037 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:30:10.990135 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 10 00:30:10.990216 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 10 00:30:10.990286 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 10 00:30:10.990364 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 10 00:30:10.990436 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:30:10.990506 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:30:10.990573 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 10 00:30:10.990640 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 10 00:30:10.990703 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 10 00:30:10.990762 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 10 00:30:10.990824 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 10 00:30:10.990834 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 10 00:30:10.990842 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 10 00:30:10.990849 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 10 00:30:10.990857 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 10 00:30:10.990865 kernel: iommu: Default domain type: Translated
Jul 10 00:30:10.990872 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 10 00:30:10.990880 kernel: efivars: Registered efivars operations
Jul 10 00:30:10.990890 kernel: vgaarb: loaded
Jul 10 00:30:10.990898 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 10 00:30:10.990905 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:30:10.990913 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:30:10.990921 kernel: pnp: PnP ACPI init
Jul 10 00:30:10.991000 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 10 00:30:10.991010 kernel: pnp: PnP ACPI: found 1 devices
Jul 10 00:30:10.991018 kernel: NET: Registered PF_INET protocol family
Jul 10 00:30:10.991026 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:30:10.991035 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 00:30:10.991051 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:30:10.991060 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:30:10.991067 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 00:30:10.991075 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 00:30:10.991082 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:30:10.991090 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:30:10.991097 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:30:10.991107 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:30:10.991115 kernel: kvm [1]: HYP mode not available
Jul 10 00:30:10.991123 kernel: Initialise system trusted keyrings
Jul 10 00:30:10.991130 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 00:30:10.991138 kernel: Key type asymmetric registered
Jul 10 00:30:10.991145 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:30:10.991153 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:30:10.991160 kernel: io scheduler mq-deadline registered
Jul 10 00:30:10.991167 kernel: io scheduler kyber registered
Jul 10 00:30:10.991174 kernel: io scheduler bfq registered
Jul 10 00:30:10.991183 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 10 00:30:10.991191 kernel: ACPI: button: Power Button [PWRB]
Jul 10 00:30:10.991198 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 10 00:30:10.991268 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 10 00:30:10.991279 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:30:10.991286 kernel: thunder_xcv, ver 1.0
Jul 10 00:30:10.991294 kernel: thunder_bgx, ver 1.0
Jul 10 00:30:10.991301 kernel: nicpf, ver 1.0
Jul 10 00:30:10.991308 kernel: nicvf, ver 1.0
Jul 10 00:30:10.991394 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 10 00:30:10.991459 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T00:30:10 UTC (1752107410)
Jul 10 00:30:10.991469 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 10 00:30:10.991477 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 10 00:30:10.991484 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 10 00:30:10.991492 kernel: watchdog: Hard watchdog permanently disabled
Jul 10 00:30:10.991499 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:30:10.991507 kernel: Segment Routing with IPv6
Jul 10 00:30:10.991517 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:30:10.991524 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:30:10.991532 kernel: Key type dns_resolver registered
Jul 10 00:30:10.991539 kernel: registered taskstats version 1
Jul 10 00:30:10.991547 kernel: Loading compiled-in X.509 certificates
Jul 10 00:30:10.991554 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 9cbc45ab00feb4acb0fa362a962909c99fb6ef52'
Jul 10 00:30:10.991561 kernel: Key type .fscrypt registered
Jul 10 00:30:10.991568 kernel: Key type fscrypt-provisioning registered
Jul 10 00:30:10.991576 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:30:10.991585 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:30:10.991592 kernel: ima: No architecture policies found
Jul 10 00:30:10.991600 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 10 00:30:10.991607 kernel: clk: Disabling unused clocks
Jul 10 00:30:10.991614 kernel: Freeing unused kernel memory: 39424K
Jul 10 00:30:10.991621 kernel: Run /init as init process
Jul 10 00:30:10.991629 kernel: with arguments:
Jul 10 00:30:10.991636 kernel: /init
Jul 10 00:30:10.991643 kernel: with environment:
Jul 10 00:30:10.991652 kernel: HOME=/
Jul 10 00:30:10.991659 kernel: TERM=linux
Jul 10 00:30:10.991666 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:30:10.991675 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 10 00:30:10.991685 systemd[1]: Detected virtualization kvm.
Jul 10 00:30:10.991693 systemd[1]: Detected architecture arm64.
Jul 10 00:30:10.991700 systemd[1]: Running in initrd.
Jul 10 00:30:10.991709 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:30:10.991717 systemd[1]: Hostname set to <localhost>.
Jul 10 00:30:10.991725 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:30:10.991733 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:30:10.991741 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:30:10.991749 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:30:10.991758 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 00:30:10.991766 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:30:10.991776 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 00:30:10.991784 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 00:30:10.991793 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 00:30:10.991802 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 00:30:10.991810 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:30:10.991818 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:30:10.991826 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:30:10.991835 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:30:10.991843 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:30:10.991851 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:30:10.991860 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:30:10.991868 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:30:10.991876 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:30:10.991884 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 10 00:30:10.991892 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:30:10.991900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:30:10.991909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:30:10.991917 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:30:10.991925 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 00:30:10.991933 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:30:10.991941 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 00:30:10.991949 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:30:10.991957 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:30:10.991965 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:30:10.991974 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:30:10.991982 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 00:30:10.991990 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:30:10.991998 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:30:10.992024 systemd-journald[238]: Collecting audit messages is disabled.
Jul 10 00:30:10.992096 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:30:10.992106 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:30:10.992115 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:30:10.992137 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:30:10.992149 systemd-journald[238]: Journal started
Jul 10 00:30:10.992169 systemd-journald[238]: Runtime Journal (/run/log/journal/43df742da1c14a8f91d17971b66bccb3) is 5.9M, max 47.3M, 41.4M free.
Jul 10 00:30:10.979116 systemd-modules-load[239]: Inserted module 'overlay'
Jul 10 00:30:10.995768 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:30:10.997142 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:30:11.000202 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 10 00:30:11.000869 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:30:11.003266 kernel: Bridge firewalling registered
Jul 10 00:30:11.002721 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:30:11.004493 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:30:11.010486 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:30:11.012294 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:30:11.018302 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:30:11.020894 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:30:11.022418 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:30:11.034264 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 00:30:11.036653 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:30:11.045696 dracut-cmdline[276]: dracut-dracut-053
Jul 10 00:30:11.048342 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2fac453dfc912247542078b0007b053dde4e47cc6ef808508492c36b6016a78f
Jul 10 00:30:11.066494 systemd-resolved[278]: Positive Trust Anchors:
Jul 10 00:30:11.066514 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:30:11.066544 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:30:11.071422 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jul 10 00:30:11.074851 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:30:11.076067 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:30:11.117073 kernel: SCSI subsystem initialized
Jul 10 00:30:11.122064 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 00:30:11.131075 kernel: iscsi: registered transport (tcp)
Jul 10 00:30:11.143069 kernel: iscsi: registered transport (qla4xxx)
Jul 10 00:30:11.143094 kernel: QLogic iSCSI HBA Driver
Jul 10 00:30:11.185946 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:30:11.197212 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 00:30:11.214054 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 00:30:11.214118 kernel: device-mapper: uevent: version 1.0.3
Jul 10 00:30:11.215674 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 10 00:30:11.261079 kernel: raid6: neonx8 gen() 15782 MB/s
Jul 10 00:30:11.278064 kernel: raid6: neonx4 gen() 15651 MB/s
Jul 10 00:30:11.295064 kernel: raid6: neonx2 gen() 13243 MB/s
Jul 10 00:30:11.312065 kernel: raid6: neonx1 gen() 10469 MB/s
Jul 10 00:30:11.329066 kernel: raid6: int64x8 gen() 6955 MB/s
Jul 10 00:30:11.346067 kernel: raid6: int64x4 gen() 7344 MB/s
Jul 10 00:30:11.363064 kernel: raid6: int64x2 gen() 6124 MB/s
Jul 10 00:30:11.380218 kernel: raid6: int64x1 gen() 5047 MB/s
Jul 10 00:30:11.380247 kernel: raid6: using algorithm neonx8 gen() 15782 MB/s
Jul 10 00:30:11.398156 kernel: raid6: .... xor() 11939 MB/s, rmw enabled
Jul 10 00:30:11.398174 kernel: raid6: using neon recovery algorithm
Jul 10 00:30:11.404485 kernel: xor: measuring software checksum speed
Jul 10 00:30:11.404504 kernel: 8regs : 19807 MB/sec
Jul 10 00:30:11.404514 kernel: 32regs : 19664 MB/sec
Jul 10 00:30:11.405116 kernel: arm64_neon : 26910 MB/sec
Jul 10 00:30:11.405129 kernel: xor: using function: arm64_neon (26910 MB/sec)
Jul 10 00:30:11.457069 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 10 00:30:11.467793 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:30:11.481234 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:30:11.493172 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 10 00:30:11.496417 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:30:11.499820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 10 00:30:11.514990 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Jul 10 00:30:11.542591 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:30:11.550246 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:30:11.590675 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:30:11.599211 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 10 00:30:11.614087 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:30:11.615865 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:30:11.617841 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:30:11.620143 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:30:11.628217 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 10 00:30:11.640937 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:30:11.652257 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 10 00:30:11.652431 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 10 00:30:11.653197 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:30:11.653317 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:30:11.660451 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:30:11.662639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:30:11.662779 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:30:11.665104 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:30:11.671575 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 10 00:30:11.671604 kernel: GPT:9289727 != 19775487
Jul 10 00:30:11.671614 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 10 00:30:11.671631 kernel: GPT:9289727 != 19775487
Jul 10 00:30:11.671640 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 10 00:30:11.671650 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:30:11.678263 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:30:11.689927 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:30:11.698561 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:30:11.708084 kernel: BTRFS: device fsid e18a5201-bc0c-484b-ba1b-be3c0a720c32 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (507)
Jul 10 00:30:11.711453 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (514)
Jul 10 00:30:11.712448 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 10 00:30:11.717241 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 10 00:30:11.718688 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:30:11.728889 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 10 00:30:11.730288 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 10 00:30:11.736087 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 00:30:11.748202 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 10 00:30:11.753844 disk-uuid[560]: Primary Header is updated.
Jul 10 00:30:11.753844 disk-uuid[560]: Secondary Entries is updated.
Jul 10 00:30:11.753844 disk-uuid[560]: Secondary Header is updated.
Jul 10 00:30:11.758078 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:30:12.771060 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:30:12.771645 disk-uuid[561]: The operation has completed successfully.
Jul 10 00:30:12.810142 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 10 00:30:12.810240 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 10 00:30:12.834232 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 10 00:30:12.838261 sh[576]: Success
Jul 10 00:30:12.854402 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 10 00:30:12.885589 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 10 00:30:12.900463 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 10 00:30:12.902851 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 10 00:30:12.912960 kernel: BTRFS info (device dm-0): first mount of filesystem e18a5201-bc0c-484b-ba1b-be3c0a720c32
Jul 10 00:30:12.913000 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:30:12.913011 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 10 00:30:12.914062 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 10 00:30:12.915451 kernel: BTRFS info (device dm-0): using free space tree
Jul 10 00:30:12.920226 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 10 00:30:12.921875 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 10 00:30:12.934294 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 10 00:30:12.936197 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 10 00:30:12.951883 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:30:12.951942 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:30:12.951954 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:30:12.955061 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:30:12.962954 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 10 00:30:12.965386 kernel: BTRFS info (device vda6): last unmount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:30:12.970809 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 10 00:30:12.977221 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 10 00:30:13.045302 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:30:13.056244 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:30:13.086637 systemd-networkd[760]: lo: Link UP
Jul 10 00:30:13.086646 systemd-networkd[760]: lo: Gained carrier
Jul 10 00:30:13.087631 systemd-networkd[760]: Enumeration completed
Jul 10 00:30:13.087723 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:30:13.088374 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:30:13.088378 systemd-networkd[760]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:30:13.089408 systemd[1]: Reached target network.target - Network.
Jul 10 00:30:13.090655 systemd-networkd[760]: eth0: Link UP
Jul 10 00:30:13.090659 systemd-networkd[760]: eth0: Gained carrier
Jul 10 00:30:13.090667 systemd-networkd[760]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:30:13.108091 systemd-networkd[760]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 00:30:13.112271 ignition[673]: Ignition 2.19.0
Jul 10 00:30:13.112281 ignition[673]: Stage: fetch-offline
Jul 10 00:30:13.112314 ignition[673]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:30:13.112323 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:30:13.112607 ignition[673]: parsed url from cmdline: ""
Jul 10 00:30:13.112611 ignition[673]: no config URL provided
Jul 10 00:30:13.112615 ignition[673]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 00:30:13.112622 ignition[673]: no config at "/usr/lib/ignition/user.ign"
Jul 10 00:30:13.112643 ignition[673]: op(1): [started] loading QEMU firmware config module
Jul 10 00:30:13.112648 ignition[673]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 10 00:30:13.118323 ignition[673]: op(1): [finished] loading QEMU firmware config module
Jul 10 00:30:13.156847 ignition[673]: parsing config with SHA512: 122ff306722630b4dd9f63031a3e23a45e1251322bb7f6ecdffb266e565945d3b7991b931f2b70ec110f2e78223d169c95d15d26df7861d71fcb3ca4804391d2
Jul 10 00:30:13.162584 unknown[673]: fetched base config from "system"
Jul 10 00:30:13.162593 unknown[673]: fetched user config from "qemu"
Jul 10 00:30:13.162999 ignition[673]: fetch-offline: fetch-offline passed
Jul 10 00:30:13.164924 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:30:13.163085 ignition[673]: Ignition finished successfully
Jul 10 00:30:13.166525 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 10 00:30:13.178204 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 10 00:30:13.189084 ignition[772]: Ignition 2.19.0
Jul 10 00:30:13.189094 ignition[772]: Stage: kargs
Jul 10 00:30:13.189257 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:30:13.189266 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:30:13.190160 ignition[772]: kargs: kargs passed
Jul 10 00:30:13.192790 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 10 00:30:13.190207 ignition[772]: Ignition finished successfully
Jul 10 00:30:13.205185 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 10 00:30:13.214600 ignition[781]: Ignition 2.19.0
Jul 10 00:30:13.214611 ignition[781]: Stage: disks
Jul 10 00:30:13.214771 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:30:13.217539 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 10 00:30:13.214780 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:30:13.218740 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 10 00:30:13.215629 ignition[781]: disks: disks passed
Jul 10 00:30:13.220409 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 10 00:30:13.215675 ignition[781]: Ignition finished successfully
Jul 10 00:30:13.222329 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:30:13.224065 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:30:13.225477 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:30:13.246617 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 10 00:30:13.260571 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 10 00:30:13.268214 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 10 00:30:13.277183 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 10 00:30:13.319068 kernel: EXT4-fs (vda9): mounted filesystem c566fdd5-af6f-4008-858c-a2aed765f9b4 r/w with ordered data mode. Quota mode: none.
Jul 10 00:30:13.319533 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 10 00:30:13.320776 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:30:13.329155 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:30:13.330936 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 10 00:30:13.331884 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 10 00:30:13.331922 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 00:30:13.331942 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:30:13.338515 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 10 00:30:13.341473 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (799)
Jul 10 00:30:13.341713 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 10 00:30:13.346012 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:30:13.346033 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:30:13.346057 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:30:13.348213 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:30:13.349378 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:30:13.385620 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 00:30:13.390186 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Jul 10 00:30:13.394214 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 00:30:13.397755 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 00:30:13.471795 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 10 00:30:13.479224 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 10 00:30:13.481560 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 10 00:30:13.487061 kernel: BTRFS info (device vda6): last unmount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:30:13.504160 ignition[914]: INFO : Ignition 2.19.0
Jul 10 00:30:13.504160 ignition[914]: INFO : Stage: mount
Jul 10 00:30:13.505937 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:30:13.505937 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:30:13.505937 ignition[914]: INFO : mount: mount passed
Jul 10 00:30:13.505937 ignition[914]: INFO : Ignition finished successfully
Jul 10 00:30:13.506573 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 10 00:30:13.509788 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 10 00:30:13.515158 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 10 00:30:13.911727 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 10 00:30:13.921236 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:30:13.932940 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (929)
Jul 10 00:30:13.932985 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:30:13.932996 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:30:13.934720 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:30:13.937058 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:30:13.938186 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:30:13.955182 ignition[946]: INFO : Ignition 2.19.0
Jul 10 00:30:13.957406 ignition[946]: INFO : Stage: files
Jul 10 00:30:13.957406 ignition[946]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:30:13.957406 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:30:13.957406 ignition[946]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:30:13.961647 ignition[946]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 00:30:13.962964 ignition[946]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:30:13.966487 ignition[946]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:30:13.967847 ignition[946]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 00:30:13.967847 ignition[946]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:30:13.967093 unknown[946]: wrote ssh authorized keys file for user: core
Jul 10 00:30:13.971533 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 10 00:30:13.971533 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jul 10 00:30:14.077620 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 10 00:30:14.592413 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 00:30:14.594598 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jul 10 00:30:14.957168 systemd-networkd[760]: eth0: Gained IPv6LL
Jul 10 00:30:15.111416 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 10 00:30:15.645062 ignition[946]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jul 10 00:30:15.645062 ignition[946]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 10 00:30:15.648556 ignition[946]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:30:15.648556 ignition[946]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:30:15.648556 ignition[946]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 10 00:30:15.648556 ignition[946]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 10 00:30:15.648556 ignition[946]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:30:15.648556 ignition[946]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:30:15.648556 ignition[946]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 10 00:30:15.648556 ignition[946]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:30:15.672761 ignition[946]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:30:15.676788 ignition[946]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:30:15.678305 ignition[946]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:30:15.678305 ignition[946]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:30:15.678305 ignition[946]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:30:15.678305 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:30:15.678305 ignition[946]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:30:15.678305 ignition[946]: INFO : files: files passed
Jul 10 00:30:15.678305 ignition[946]: INFO : Ignition finished successfully
Jul 10 00:30:15.680315 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:30:15.693258 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:30:15.696391 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:30:15.699109 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:30:15.699194 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:30:15.703801 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 10 00:30:15.706785 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:30:15.706785 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:30:15.711122 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:30:15.710240 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:30:15.711845 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:30:15.722257 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:30:15.742628 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:30:15.742750 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:30:15.745027 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:30:15.747954 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:30:15.749027 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:30:15.749895 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:30:15.768094 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:30:15.770620 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:30:15.782747 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:30:15.784040 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:30:15.786133 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:30:15.787955 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:30:15.788094 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:30:15.790656 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:30:15.792731 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:30:15.794391 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:30:15.796127 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:30:15.798182 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 10 00:30:15.800269 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 10 00:30:15.802232 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:30:15.804315 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 10 00:30:15.806296 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 10 00:30:15.808099 systemd[1]: Stopped target swap.target - Swaps.
Jul 10 00:30:15.810055 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 10 00:30:15.810208 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:30:15.812723 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:30:15.815062 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:30:15.817095 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 00:30:15.818167 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:30:15.820311 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:30:15.820453 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 00:30:15.823293 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:30:15.823419 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:30:15.825745 systemd[1]: Stopped target paths.target - Path Units. Jul 10 00:30:15.827323 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:30:15.827488 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:30:15.829458 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 00:30:15.831030 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 00:30:15.832828 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:30:15.832919 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:30:15.835073 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:30:15.835156 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:30:15.836775 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:30:15.836881 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:30:15.838800 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:30:15.838899 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 10 00:30:15.855591 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 00:30:15.856525 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:30:15.856664 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:30:15.862309 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 00:30:15.863183 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:30:15.863316 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:30:15.865152 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:30:15.865259 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:30:15.870826 ignition[1000]: INFO : Ignition 2.19.0 Jul 10 00:30:15.870826 ignition[1000]: INFO : Stage: umount Jul 10 00:30:15.870826 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:30:15.870826 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:30:15.870312 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:30:15.881675 ignition[1000]: INFO : umount: umount passed Jul 10 00:30:15.881675 ignition[1000]: INFO : Ignition finished successfully Jul 10 00:30:15.870486 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 10 00:30:15.875938 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:30:15.876391 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:30:15.876545 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jul 10 00:30:15.879541 systemd[1]: Stopped target network.target - Network. Jul 10 00:30:15.880730 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:30:15.880787 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 00:30:15.882619 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:30:15.882669 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 00:30:15.884393 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:30:15.884438 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 00:30:15.886351 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 00:30:15.886400 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 00:30:15.888539 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 00:30:15.890143 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 00:30:15.894375 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:30:15.894482 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 00:30:15.896824 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 00:30:15.896872 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:30:15.898093 systemd-networkd[760]: eth0: DHCPv6 lease lost Jul 10 00:30:15.899163 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:30:15.899258 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 00:30:15.901448 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:30:15.901502 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:30:15.912194 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 00:30:15.913523 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:30:15.913590 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:30:15.915488 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:30:15.915534 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:30:15.917402 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 00:30:15.917445 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 00:30:15.919300 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:30:15.930686 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:30:15.930794 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 00:30:15.945767 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:30:15.947079 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:30:15.948628 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:30:15.948668 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 00:30:15.950429 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:30:15.950465 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:30:15.952387 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:30:15.952434 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jul 10 00:30:15.955117 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:30:15.955160 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 00:30:15.957805 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:30:15.957847 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:30:15.977236 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 00:30:15.978322 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 00:30:15.978401 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:30:15.980632 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:30:15.980681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:30:15.982874 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:30:15.983104 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 00:30:15.984812 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:30:15.984898 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 00:30:15.987620 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 00:30:15.989532 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:30:15.989625 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 00:30:15.992405 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 00:30:16.005690 systemd[1]: Switching root. Jul 10 00:30:16.037954 systemd-journald[238]: Journal stopped Jul 10 00:30:16.806115 systemd-journald[238]: Received SIGTERM from PID 1 (systemd). Jul 10 00:30:16.806163 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:30:16.806178 kernel: SELinux: policy capability open_perms=1 Jul 10 00:30:16.806188 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:30:16.806200 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:30:16.806210 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:30:16.806221 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:30:16.806231 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:30:16.806240 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:30:16.806253 kernel: audit: type=1403 audit(1752107416.189:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:30:16.806264 systemd[1]: Successfully loaded SELinux policy in 32.139ms. Jul 10 00:30:16.806280 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.364ms. Jul 10 00:30:16.806292 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 10 00:30:16.806308 systemd[1]: Detected virtualization kvm. Jul 10 00:30:16.806318 systemd[1]: Detected architecture arm64. Jul 10 00:30:16.806328 systemd[1]: Detected first boot. Jul 10 00:30:16.806339 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:30:16.806360 zram_generator::config[1044]: No configuration found. Jul 10 00:30:16.806371 systemd[1]: Populated /etc with preset unit settings. 
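At the pivot, the initramfs journal stops, PID 1 switches root, and the kernel records the SELinux policy load (the type=1403 audit event, 32.139 ms) before systemd relabels /dev, /dev/shm, /run and the cgroup tree in about 9 ms. A short sketch for finding the same events after boot, using only standard interfaces:

$ journalctl -b -k | grep 'type=1403'   # the SELinux policy-load audit record above
$ cat /sys/fs/selinux/enforce           # 0 = permissive, 1 = enforcing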
Jul 10 00:30:16.806381 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 10 00:30:16.806392 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 10 00:30:16.806404 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 10 00:30:16.806415 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 00:30:16.806427 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 00:30:16.806438 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 00:30:16.806449 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 00:30:16.806459 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 00:30:16.806470 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 00:30:16.806481 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 00:30:16.806493 systemd[1]: Created slice user.slice - User and Session Slice. Jul 10 00:30:16.806504 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:30:16.806515 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:30:16.806526 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 00:30:16.806536 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 00:30:16.806548 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 00:30:16.806560 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 10 00:30:16.806571 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 10 00:30:16.806585 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:30:16.806596 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 10 00:30:16.806608 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 10 00:30:16.806619 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 10 00:30:16.806629 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 00:30:16.806640 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:30:16.806651 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:30:16.806661 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:30:16.806672 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:30:16.806684 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 00:30:16.806694 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 00:30:16.806705 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:30:16.806716 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:30:16.806726 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:30:16.806737 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 00:30:16.806747 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
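First boot populates /etc from the vendor presets, then the usual slice, path and automount units come up. The same objects can be listed on a running system (plain systemctl, no assumptions beyond systemd 255):

$ systemctl --type=slice --no-pager       # getty.slice, system-modprobe.slice, user.slice, ...
$ systemctl --type=automount --no-pager   # boot.automount, proc-sys-fs-binfmt_misc.automount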
Jul 10 00:30:16.806757 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 00:30:16.806768 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 00:30:16.806780 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 00:30:16.806792 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 00:30:16.806802 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 00:30:16.806813 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:30:16.806824 systemd[1]: Reached target machines.target - Containers. Jul 10 00:30:16.806834 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 00:30:16.806845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:30:16.806855 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:30:16.806866 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 00:30:16.806878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:30:16.806889 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:30:16.806900 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:30:16.806910 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 00:30:16.806921 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:30:16.806932 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:30:16.806943 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 10 00:30:16.806953 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 10 00:30:16.806966 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 10 00:30:16.806976 kernel: fuse: init (API version 7.39) Jul 10 00:30:16.806986 systemd[1]: Stopped systemd-fsck-usr.service. Jul 10 00:30:16.806996 kernel: loop: module loaded Jul 10 00:30:16.807006 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 10 00:30:16.807017 kernel: ACPI: bus type drm_connector registered Jul 10 00:30:16.807027 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:30:16.807038 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:30:16.807065 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 00:30:16.807079 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:30:16.807090 systemd[1]: verity-setup.service: Deactivated successfully. Jul 10 00:30:16.807100 systemd[1]: Stopped verity-setup.service. Jul 10 00:30:16.807111 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 00:30:16.807121 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 00:30:16.807131 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 00:30:16.807158 systemd-journald[1125]: Collecting audit messages is disabled. Jul 10 00:30:16.807180 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
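The modprobe@ template units above pull in fuse and loop (the drm connector bus type is already registered), and the kernel API filesystems are mounted one by one. Equivalent checks, as a sketch:

$ findmnt -t debugfs,tracefs,mqueue,hugetlbfs   # the kernel filesystems mounted above
$ lsmod | grep -E '^(fuse|loop)'                # modules loaded via modprobe@.service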
Jul 10 00:30:16.807192 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 00:30:16.807204 systemd-journald[1125]: Journal started Jul 10 00:30:16.807225 systemd-journald[1125]: Runtime Journal (/run/log/journal/43df742da1c14a8f91d17971b66bccb3) is 5.9M, max 47.3M, 41.4M free. Jul 10 00:30:16.579407 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:30:16.597605 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 10 00:30:16.597962 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 10 00:30:16.808847 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 00:30:16.810718 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:30:16.813082 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 00:30:16.814481 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:30:16.815929 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:30:16.816081 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 00:30:16.817427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:30:16.817552 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:30:16.818873 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:30:16.819003 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:30:16.820311 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:30:16.820461 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:30:16.821841 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:30:16.821970 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 00:30:16.824396 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:30:16.824524 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:30:16.825801 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:30:16.827390 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:30:16.830075 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 00:30:16.841778 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:30:16.853142 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 00:30:16.855082 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 10 00:30:16.856125 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:30:16.856162 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:30:16.857968 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 10 00:30:16.860165 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:30:16.862105 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 00:30:16.863156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:30:16.864860 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
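systemd-journald reports a 5.9M runtime journal (cap 47.3M) under /run/log/journal/<machine-id>. Two hedged commands for inspecting the same numbers later:

$ journalctl --disk-usage   # current journal footprint
$ journalctl --header       # per-file limits and usage for the directory above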
Jul 10 00:30:16.867214 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 00:30:16.868555 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:30:16.869541 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 00:30:16.870882 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:30:16.872030 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:30:16.875567 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 00:30:16.891073 systemd-journald[1125]: Time spent on flushing to /var/log/journal/43df742da1c14a8f91d17971b66bccb3 is 21.722ms for 852 entries. Jul 10 00:30:16.891073 systemd-journald[1125]: System Journal (/var/log/journal/43df742da1c14a8f91d17971b66bccb3) is 8.0M, max 195.6M, 187.6M free. Jul 10 00:30:16.934562 systemd-journald[1125]: Received client request to flush runtime journal. Jul 10 00:30:16.934656 kernel: loop0: detected capacity change from 0 to 211168 Jul 10 00:30:16.880119 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 00:30:16.882537 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:30:16.885357 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 00:30:16.886736 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 00:30:16.888168 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:30:16.892428 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 00:30:16.896800 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 00:30:16.910276 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 10 00:30:16.914264 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 10 00:30:16.916971 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:30:16.927221 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 10 00:30:16.936370 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 00:30:16.945404 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:30:16.948126 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:30:16.948739 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 00:30:16.960258 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:30:16.961654 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 10 00:30:16.977657 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jul 10 00:30:16.977675 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jul 10 00:30:16.978059 kernel: loop1: detected capacity change from 0 to 114328 Jul 10 00:30:16.983093 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
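systemd-machine-id-commit persists the transient machine ID, sysusers/tmpfiles create the static users and device nodes, and the 21.722 ms flush moves the runtime journal into the 8.0M system journal in /var/log/journal. A sketch of the corresponding state:

$ cat /etc/machine-id                    # the ID committed to disk above
$ systemd-tmpfiles --cat-config | head   # the effective tmpfiles.d configuration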
Jul 10 00:30:17.024084 kernel: loop2: detected capacity change from 0 to 114432 Jul 10 00:30:17.055065 kernel: loop3: detected capacity change from 0 to 211168 Jul 10 00:30:17.063106 kernel: loop4: detected capacity change from 0 to 114328 Jul 10 00:30:17.069101 kernel: loop5: detected capacity change from 0 to 114432 Jul 10 00:30:17.072114 (sd-merge)[1180]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 10 00:30:17.072541 (sd-merge)[1180]: Merged extensions into '/usr'. Jul 10 00:30:17.077012 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 00:30:17.077030 systemd[1]: Reloading... Jul 10 00:30:17.136069 zram_generator::config[1207]: No configuration found. Jul 10 00:30:17.167166 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:30:17.231480 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:30:17.267490 systemd[1]: Reloading finished in 190 ms. Jul 10 00:30:17.297412 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 00:30:17.299080 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 00:30:17.314261 systemd[1]: Starting ensure-sysext.service... Jul 10 00:30:17.316293 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:30:17.327436 systemd[1]: Reloading requested from client PID 1240 ('systemctl') (unit ensure-sysext.service)... Jul 10 00:30:17.327451 systemd[1]: Reloading... Jul 10 00:30:17.341739 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 00:30:17.341996 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 00:30:17.342672 systemd-tmpfiles[1241]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:30:17.342888 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jul 10 00:30:17.342939 systemd-tmpfiles[1241]: ACLs are not supported, ignoring. Jul 10 00:30:17.345236 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:30:17.345248 systemd-tmpfiles[1241]: Skipping /boot Jul 10 00:30:17.351828 systemd-tmpfiles[1241]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:30:17.351845 systemd-tmpfiles[1241]: Skipping /boot Jul 10 00:30:17.378222 zram_generator::config[1265]: No configuration found. Jul 10 00:30:17.459156 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:30:17.494623 systemd[1]: Reloading finished in 166 ms. Jul 10 00:30:17.508917 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 00:30:17.524114 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:30:17.531773 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 10 00:30:17.534138 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
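sd-merge overlays the containerd-flatcar, docker-flatcar and kubernetes sysext images onto /usr, which is why systemd reloads twice and ldconfig reruns. A hedged sketch for working with the merged extensions:

$ systemd-sysext status    # should list the three extensions merged above
$ ls -l /etc/extensions    # kubernetes.raw symlink written by Ignition earlier
$ systemd-sysext refresh   # re-merge after adding or removing an image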
Jul 10 00:30:17.536286 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 00:30:17.539356 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:30:17.541914 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:30:17.545009 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 00:30:17.548120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:30:17.552353 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:30:17.555759 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:30:17.559529 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:30:17.560793 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:30:17.562965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:30:17.564085 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:30:17.565636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:30:17.565754 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:30:17.568873 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:30:17.568997 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:30:17.571841 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 00:30:17.574690 systemd-udevd[1310]: Using default interface naming scheme 'v255'. Jul 10 00:30:17.582561 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 00:30:17.586120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:30:17.593477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:30:17.595621 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:30:17.600362 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:30:17.605856 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:30:17.607066 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:30:17.608776 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 00:30:17.613381 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 00:30:17.615765 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:30:17.617930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:30:17.620088 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:30:17.621704 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:30:17.621825 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:30:17.626392 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:30:17.626810 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 10 00:30:17.629162 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 00:30:17.630814 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:30:17.631094 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:30:17.635316 systemd[1]: Finished ensure-sysext.service. Jul 10 00:30:17.638303 augenrules[1358]: No rules Jul 10 00:30:17.641281 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 00:30:17.642774 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 10 00:30:17.663260 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:30:17.666175 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:30:17.666246 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:30:17.669908 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1346) Jul 10 00:30:17.669210 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 00:30:17.671196 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:30:17.671382 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 00:30:17.675006 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 10 00:30:17.728259 systemd-networkd[1371]: lo: Link UP Jul 10 00:30:17.728267 systemd-networkd[1371]: lo: Gained carrier Jul 10 00:30:17.728958 systemd-networkd[1371]: Enumeration completed Jul 10 00:30:17.729074 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:30:17.739242 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:30:17.739250 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:30:17.739990 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:30:17.740021 systemd-networkd[1371]: eth0: Link UP Jul 10 00:30:17.740025 systemd-networkd[1371]: eth0: Gained carrier Jul 10 00:30:17.740033 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:30:17.743287 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 00:30:17.747328 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:30:17.753308 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 00:30:17.767773 systemd-networkd[1371]: eth0: DHCPv4 address 10.0.0.74/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:30:17.767998 systemd-resolved[1308]: Positive Trust Anchors: Jul 10 00:30:17.768268 systemd-resolved[1308]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:30:17.768364 systemd-resolved[1308]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:30:17.774586 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 00:30:17.775523 systemd-timesyncd[1376]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:30:17.775577 systemd-timesyncd[1376]: Initial clock synchronization to Thu 2025-07-10 00:30:17.911440 UTC. Jul 10 00:30:17.776275 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:30:17.780623 systemd-resolved[1308]: Defaulting to hostname 'linux'. Jul 10 00:30:17.786770 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:30:17.789536 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 00:30:17.794556 systemd[1]: Reached target network.target - Network. Jul 10 00:30:17.798972 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:30:17.819297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:30:17.824872 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 10 00:30:17.827526 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 10 00:30:17.862439 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:30:17.863613 lvm[1396]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:30:17.896710 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 10 00:30:17.898281 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:30:17.899536 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:30:17.900737 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:30:17.901992 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:30:17.903444 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:30:17.904591 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 00:30:17.905821 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:30:17.907059 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:30:17.907104 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:30:17.907960 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:30:17.910019 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:30:17.912416 systemd[1]: Starting docker.socket - Docker Socket for the API... 
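By this point eth0 holds 10.0.0.74/16 from DHCP, resolved has installed the root DNSSEC trust anchor (key tag 20326) and defaulted the hostname to 'linux', and timesyncd has stepped the clock against 10.0.0.1. The same state is queryable with the stock tools (sketch):

$ networkctl status eth0        # address, gateway, carrier
$ resolvectl status             # per-link DNS configuration
$ timedatectl timesync-status   # NTP server, offset, poll interval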
Jul 10 00:30:17.923802 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:30:17.926193 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 10 00:30:17.927795 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:30:17.928985 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:30:17.929934 systemd[1]: Reached target basic.target - Basic System. Jul 10 00:30:17.930910 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:30:17.930945 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:30:17.931787 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:30:17.935083 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:30:17.933719 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:30:17.936696 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:30:17.939449 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:30:17.942749 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:30:17.944212 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:30:17.944840 jq[1407]: false Jul 10 00:30:17.946239 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:30:17.948189 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:30:17.952019 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:30:17.958424 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:30:17.960468 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:30:17.960966 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:30:17.961944 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:30:17.964627 extend-filesystems[1408]: Found loop3 Jul 10 00:30:17.964627 extend-filesystems[1408]: Found loop4 Jul 10 00:30:17.964627 extend-filesystems[1408]: Found loop5 Jul 10 00:30:17.964627 extend-filesystems[1408]: Found vda Jul 10 00:30:17.964627 extend-filesystems[1408]: Found vda1 Jul 10 00:30:17.964627 extend-filesystems[1408]: Found vda2 Jul 10 00:30:17.964627 extend-filesystems[1408]: Found vda3 Jul 10 00:30:17.964627 extend-filesystems[1408]: Found usr Jul 10 00:30:17.964627 extend-filesystems[1408]: Found vda4 Jul 10 00:30:17.964627 extend-filesystems[1408]: Found vda6 Jul 10 00:30:17.964627 extend-filesystems[1408]: Found vda7 Jul 10 00:30:17.964627 extend-filesystems[1408]: Found vda9 Jul 10 00:30:17.964627 extend-filesystems[1408]: Checking size of /dev/vda9 Jul 10 00:30:17.975849 dbus-daemon[1406]: [system] SELinux support is enabled Jul 10 00:30:17.964965 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:30:18.008401 extend-filesystems[1408]: Resized partition /dev/vda9 Jul 10 00:30:17.966734 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Jul 10 00:30:17.971456 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:30:18.009566 jq[1424]: true Jul 10 00:30:17.973913 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:30:17.974241 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:30:18.009822 tar[1427]: linux-arm64/LICENSE Jul 10 00:30:17.974393 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 10 00:30:18.010051 jq[1429]: true Jul 10 00:30:17.976358 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:30:17.981604 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:30:17.981780 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 00:30:17.997545 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:30:17.997573 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:30:17.999518 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:30:17.999539 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:30:18.002243 (ntainerd)[1430]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:30:18.025610 update_engine[1421]: I20250710 00:30:18.024796 1421 main.cc:92] Flatcar Update Engine starting Jul 10 00:30:18.025930 tar[1427]: linux-arm64/helm Jul 10 00:30:18.027201 extend-filesystems[1444]: resize2fs 1.47.1 (20-May-2024) Jul 10 00:30:18.029148 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1337) Jul 10 00:30:18.035194 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:30:18.036860 update_engine[1421]: I20250710 00:30:18.036270 1421 update_check_scheduler.cc:74] Next update check in 10m33s Jul 10 00:30:18.038150 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:30:18.038353 systemd-logind[1417]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 00:30:18.038799 systemd-logind[1417]: New seat seat0. Jul 10 00:30:18.041537 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:30:18.062516 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 00:30:18.083085 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:30:18.106731 extend-filesystems[1444]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:30:18.106731 extend-filesystems[1444]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:30:18.106731 extend-filesystems[1444]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:30:18.115938 extend-filesystems[1408]: Resized filesystem in /dev/vda9 Jul 10 00:30:18.117915 bash[1459]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:30:18.109816 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:30:18.109986 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
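extend-filesystems grows the root ext4 online from 553472 to 1864699 4 KiB blocks, i.e. from about 2.1 GiB to about 7.1 GiB (1864699 * 4096 ≈ 7.64 GB), while / stays mounted. The manual equivalent, as a sketch against the device names in this log:

$ lsblk -o NAME,SIZE,MOUNTPOINT /dev/vda   # confirm vda9 backs /
$ resize2fs /dev/vda9                      # online grow to fill the partition
$ df -h /                                  # verify the new size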
Jul 10 00:30:18.119214 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:30:18.121022 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 00:30:18.128669 locksmithd[1460]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:30:18.248209 containerd[1430]: time="2025-07-10T00:30:18.248126263Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 10 00:30:18.279038 containerd[1430]: time="2025-07-10T00:30:18.278760808Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:30:18.280585 containerd[1430]: time="2025-07-10T00:30:18.280520010Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:30:18.280585 containerd[1430]: time="2025-07-10T00:30:18.280582089Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 00:30:18.280656 containerd[1430]: time="2025-07-10T00:30:18.280599826Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 00:30:18.280825 containerd[1430]: time="2025-07-10T00:30:18.280798227Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 10 00:30:18.280884 containerd[1430]: time="2025-07-10T00:30:18.280868523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 10 00:30:18.280966 containerd[1430]: time="2025-07-10T00:30:18.280949519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:30:18.280986 containerd[1430]: time="2025-07-10T00:30:18.280968191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:30:18.281330 containerd[1430]: time="2025-07-10T00:30:18.281238353Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:30:18.281330 containerd[1430]: time="2025-07-10T00:30:18.281312758Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 00:30:18.281383 containerd[1430]: time="2025-07-10T00:30:18.281331471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:30:18.281383 containerd[1430]: time="2025-07-10T00:30:18.281342130Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 00:30:18.281440 containerd[1430]: time="2025-07-10T00:30:18.281424183Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:30:18.281697 containerd[1430]: time="2025-07-10T00:30:18.281676648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 10 00:30:18.281874 containerd[1430]: time="2025-07-10T00:30:18.281854139Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:30:18.281899 containerd[1430]: time="2025-07-10T00:30:18.281875008Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 00:30:18.281967 containerd[1430]: time="2025-07-10T00:30:18.281953603Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 00:30:18.282083 containerd[1430]: time="2025-07-10T00:30:18.282055590Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:30:18.285440 containerd[1430]: time="2025-07-10T00:30:18.285406471Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 00:30:18.285488 containerd[1430]: time="2025-07-10T00:30:18.285460210Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 00:30:18.285488 containerd[1430]: time="2025-07-10T00:30:18.285477052Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 10 00:30:18.285535 containerd[1430]: time="2025-07-10T00:30:18.285502844Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 10 00:30:18.285535 containerd[1430]: time="2025-07-10T00:30:18.285519076Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 00:30:18.285904 containerd[1430]: time="2025-07-10T00:30:18.285870639Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 00:30:18.286258 containerd[1430]: time="2025-07-10T00:30:18.286236320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286413729Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286435330Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286449202Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286464173Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286477923Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286490127Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286503593Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286517994Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286530239Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286543012Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286556071Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286577388Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286591179Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.286904 containerd[1430]: time="2025-07-10T00:30:18.286603912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286616360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286629296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286643209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286655658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286669042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286681165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286698006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286710740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286722456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286748491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286766350Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286787301Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286799383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.287192 containerd[1430]: time="2025-07-10T00:30:18.286811221Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 00:30:18.287493 containerd[1430]: time="2025-07-10T00:30:18.287474766Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 00:30:18.288117 containerd[1430]: time="2025-07-10T00:30:18.287546649Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 10 00:30:18.288117 containerd[1430]: time="2025-07-10T00:30:18.287562474Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 00:30:18.288117 containerd[1430]: time="2025-07-10T00:30:18.287575655Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 10 00:30:18.288117 containerd[1430]: time="2025-07-10T00:30:18.287587452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 00:30:18.288117 containerd[1430]: time="2025-07-10T00:30:18.287600877Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 10 00:30:18.288117 containerd[1430]: time="2025-07-10T00:30:18.287611251Z" level=info msg="NRI interface is disabled by configuration." Jul 10 00:30:18.288117 containerd[1430]: time="2025-07-10T00:30:18.287622438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 10 00:30:18.288293 containerd[1430]: time="2025-07-10T00:30:18.287959804Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 00:30:18.288293 containerd[1430]: time="2025-07-10T00:30:18.288017042Z" level=info msg="Connect containerd service" Jul 10 00:30:18.288293 containerd[1430]: time="2025-07-10T00:30:18.288043200Z" level=info msg="using legacy CRI server" Jul 10 00:30:18.288293 containerd[1430]: time="2025-07-10T00:30:18.288049505Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:30:18.289683 containerd[1430]: time="2025-07-10T00:30:18.289650825Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 00:30:18.291049 containerd[1430]: time="2025-07-10T00:30:18.291012576Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:30:18.291718 
containerd[1430]: time="2025-07-10T00:30:18.291275537Z" level=info msg="Start subscribing containerd event" Jul 10 00:30:18.291718 containerd[1430]: time="2025-07-10T00:30:18.291325330Z" level=info msg="Start recovering state" Jul 10 00:30:18.291718 containerd[1430]: time="2025-07-10T00:30:18.291386595Z" level=info msg="Start event monitor" Jul 10 00:30:18.291718 containerd[1430]: time="2025-07-10T00:30:18.291395993Z" level=info msg="Start snapshots syncer" Jul 10 00:30:18.291718 containerd[1430]: time="2025-07-10T00:30:18.291404739Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:30:18.291718 containerd[1430]: time="2025-07-10T00:30:18.291412062Z" level=info msg="Start streaming server" Jul 10 00:30:18.291718 containerd[1430]: time="2025-07-10T00:30:18.291693166Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:30:18.291867 containerd[1430]: time="2025-07-10T00:30:18.291745238Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:30:18.291887 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:30:18.293297 containerd[1430]: time="2025-07-10T00:30:18.293271379Z" level=info msg="containerd successfully booted in 0.047777s" Jul 10 00:30:18.429618 tar[1427]: linux-arm64/README.md Jul 10 00:30:18.442407 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:30:18.797910 sshd_keygen[1426]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:30:18.816238 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:30:18.829404 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:30:18.834685 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:30:18.836119 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:30:18.839049 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:30:18.853381 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:30:18.868434 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:30:18.870709 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 10 00:30:18.872045 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 00:30:19.758107 systemd-networkd[1371]: eth0: Gained IPv6LL Jul 10 00:30:19.760747 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:30:19.763592 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:30:19.773328 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 00:30:19.775617 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:30:19.777792 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:30:19.791958 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:30:19.792316 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 10 00:30:19.794564 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:30:19.797532 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:30:20.385486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:30:20.387115 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 10 00:30:20.389760 systemd[1]: Startup finished in 688ms (kernel) + 5.399s (initrd) + 4.239s (userspace) = 10.327s. Jul 10 00:30:20.390274 (kubelet)[1518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:30:20.830741 kubelet[1518]: E0710 00:30:20.830620 1518 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:30:20.833207 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:30:20.833357 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:30:23.770787 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:30:23.771891 systemd[1]: Started sshd@0-10.0.0.74:22-10.0.0.1:33288.service - OpenSSH per-connection server daemon (10.0.0.1:33288). Jul 10 00:30:23.827594 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 33288 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:23.829543 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:23.837904 systemd-logind[1417]: New session 1 of user core. Jul 10 00:30:23.838923 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:30:23.858351 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:30:23.868458 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:30:23.870802 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:30:23.877449 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:30:23.955234 systemd[1535]: Queued start job for default target default.target. Jul 10 00:30:23.966988 systemd[1535]: Created slice app.slice - User Application Slice. Jul 10 00:30:23.967018 systemd[1535]: Reached target paths.target - Paths. Jul 10 00:30:23.967030 systemd[1535]: Reached target timers.target - Timers. Jul 10 00:30:23.968246 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:30:23.978111 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:30:23.978171 systemd[1535]: Reached target sockets.target - Sockets. Jul 10 00:30:23.978183 systemd[1535]: Reached target basic.target - Basic System. Jul 10 00:30:23.978219 systemd[1535]: Reached target default.target - Main User Target. Jul 10 00:30:23.978245 systemd[1535]: Startup finished in 95ms. Jul 10 00:30:23.978515 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:30:23.979766 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:30:24.043820 systemd[1]: Started sshd@1-10.0.0.74:22-10.0.0.1:33300.service - OpenSSH per-connection server daemon (10.0.0.1:33300). Jul 10 00:30:24.085418 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 33300 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:24.086679 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:24.090117 systemd-logind[1417]: New session 2 of user core. Jul 10 00:30:24.103234 systemd[1]: Started session-2.scope - Session 2 of User core. 
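The kubelet exit above is expected rather than a fault: /var/lib/kubelet/config.yaml is written by kubeadm during init or join, so until that runs the unit fails and systemd keeps rescheduling it (the restart counter reappears below). A minimal KubeletConfiguration of the shape kubeadm generates, with illustrative values, would be:

    # /var/lib/kubelet/config.yaml (sketch; values illustrative)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd        # matches SystemdCgroup:true in the runc options above
    staticPodPath: /etc/kubernetes/manifests
    clusterDomain: cluster.local
    clusterDNS:
      - 10.96.0.10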
Jul 10 00:30:24.156340 sshd[1546]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:24.172636 systemd[1]: sshd@1-10.0.0.74:22-10.0.0.1:33300.service: Deactivated successfully. Jul 10 00:30:24.174270 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:30:24.176618 systemd-logind[1417]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:30:24.176921 systemd[1]: Started sshd@2-10.0.0.74:22-10.0.0.1:33302.service - OpenSSH per-connection server daemon (10.0.0.1:33302). Jul 10 00:30:24.178426 systemd-logind[1417]: Removed session 2. Jul 10 00:30:24.211854 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 33302 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:24.213128 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:24.216703 systemd-logind[1417]: New session 3 of user core. Jul 10 00:30:24.231280 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:30:24.279463 sshd[1553]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:24.290141 systemd[1]: sshd@2-10.0.0.74:22-10.0.0.1:33302.service: Deactivated successfully. Jul 10 00:30:24.291387 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:30:24.293233 systemd-logind[1417]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:30:24.300927 systemd[1]: Started sshd@3-10.0.0.74:22-10.0.0.1:33308.service - OpenSSH per-connection server daemon (10.0.0.1:33308). Jul 10 00:30:24.302112 systemd-logind[1417]: Removed session 3. Jul 10 00:30:24.330335 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 33308 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:24.331629 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:24.335154 systemd-logind[1417]: New session 4 of user core. Jul 10 00:30:24.346236 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 10 00:30:24.400362 sshd[1560]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:24.409440 systemd[1]: sshd@3-10.0.0.74:22-10.0.0.1:33308.service: Deactivated successfully. Jul 10 00:30:24.416597 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:30:24.417893 systemd-logind[1417]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:30:24.435445 systemd[1]: Started sshd@4-10.0.0.74:22-10.0.0.1:33318.service - OpenSSH per-connection server daemon (10.0.0.1:33318). Jul 10 00:30:24.436240 systemd-logind[1417]: Removed session 4. Jul 10 00:30:24.466765 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 33318 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:24.468393 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:24.473465 systemd-logind[1417]: New session 5 of user core. Jul 10 00:30:24.484246 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:30:24.544923 sudo[1570]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:30:24.547337 sudo[1570]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:30:24.560404 sudo[1570]: pam_unix(sudo:session): session closed for user root Jul 10 00:30:24.562468 sshd[1567]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:24.571516 systemd[1]: sshd@4-10.0.0.74:22-10.0.0.1:33318.service: Deactivated successfully. 
Jul 10 00:30:24.572994 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:30:24.578203 systemd-logind[1417]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:30:24.587533 systemd[1]: Started sshd@5-10.0.0.74:22-10.0.0.1:33330.service - OpenSSH per-connection server daemon (10.0.0.1:33330). Jul 10 00:30:24.591773 systemd-logind[1417]: Removed session 5. Jul 10 00:30:24.617899 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 33330 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:24.619160 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:24.623065 systemd-logind[1417]: New session 6 of user core. Jul 10 00:30:24.638286 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 00:30:24.690424 sudo[1579]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:30:24.690985 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:30:24.693915 sudo[1579]: pam_unix(sudo:session): session closed for user root Jul 10 00:30:24.698640 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 10 00:30:24.698930 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:30:24.715355 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 10 00:30:24.716314 auditctl[1582]: No rules Jul 10 00:30:24.717171 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:30:24.717366 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 10 00:30:24.719004 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 10 00:30:24.741594 augenrules[1600]: No rules Jul 10 00:30:24.742831 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 10 00:30:24.743807 sudo[1578]: pam_unix(sudo:session): session closed for user root Jul 10 00:30:24.745361 sshd[1575]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:24.758503 systemd[1]: sshd@5-10.0.0.74:22-10.0.0.1:33330.service: Deactivated successfully. Jul 10 00:30:24.760172 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:30:24.762234 systemd-logind[1417]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:30:24.763066 systemd[1]: Started sshd@6-10.0.0.74:22-10.0.0.1:33344.service - OpenSSH per-connection server daemon (10.0.0.1:33344). Jul 10 00:30:24.764264 systemd-logind[1417]: Removed session 6. Jul 10 00:30:24.799909 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 33344 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:24.801188 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:24.804842 systemd-logind[1417]: New session 7 of user core. Jul 10 00:30:24.820230 systemd[1]: Started session-7.scope - Session 7 of User core. 
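The sudo session above removes the two shipped rule files from /etc/audit/rules.d and restarts audit-rules.service; augenrules then compiles an empty set, which is why both auditctl and augenrules report "No rules". If rules were wanted back, a file in standard auditctl watch syntax could be dropped into the same directory; the watch below is illustrative only:

    # /etc/audit/rules.d/10-kube.rules (illustrative)
    -w /etc/kubernetes/ -p wa -k kube-config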
Jul 10 00:30:24.871519 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:30:24.871790 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:30:25.200385 (dockerd)[1630]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:30:25.200600 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:30:25.487854 dockerd[1630]: time="2025-07-10T00:30:25.487712672Z" level=info msg="Starting up" Jul 10 00:30:25.674777 dockerd[1630]: time="2025-07-10T00:30:25.674735046Z" level=info msg="Loading containers: start." Jul 10 00:30:25.762077 kernel: Initializing XFRM netlink socket Jul 10 00:30:25.833309 systemd-networkd[1371]: docker0: Link UP Jul 10 00:30:25.850383 dockerd[1630]: time="2025-07-10T00:30:25.850342791Z" level=info msg="Loading containers: done." Jul 10 00:30:25.862730 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2043713831-merged.mount: Deactivated successfully. Jul 10 00:30:25.864534 dockerd[1630]: time="2025-07-10T00:30:25.864490850Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:30:25.864610 dockerd[1630]: time="2025-07-10T00:30:25.864594941Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 10 00:30:25.864708 dockerd[1630]: time="2025-07-10T00:30:25.864695287Z" level=info msg="Daemon has completed initialization" Jul 10 00:30:25.900419 dockerd[1630]: time="2025-07-10T00:30:25.900356789Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:30:25.900609 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:30:26.392531 containerd[1430]: time="2025-07-10T00:30:26.392495312Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 10 00:30:27.165759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883397045.mount: Deactivated successfully. 
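The dockerd warning above about native diff is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in this kernel, the overlay2 driver falls back to the slower naive diff path, which can degrade image builds but not correctness. The active storage driver can be confirmed with the standard CLI:

    docker info --format '{{.Driver}}'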
Jul 10 00:30:28.306136 containerd[1430]: time="2025-07-10T00:30:28.306080953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:28.307936 containerd[1430]: time="2025-07-10T00:30:28.307822637Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 10 00:30:28.310157 containerd[1430]: time="2025-07-10T00:30:28.310125383Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:28.313285 containerd[1430]: time="2025-07-10T00:30:28.313222991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:28.315563 containerd[1430]: time="2025-07-10T00:30:28.314873911Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.922335121s" Jul 10 00:30:28.315563 containerd[1430]: time="2025-07-10T00:30:28.314913246Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 10 00:30:28.320985 containerd[1430]: time="2025-07-10T00:30:28.320955743Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 10 00:30:29.571468 containerd[1430]: time="2025-07-10T00:30:29.571406953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:29.572063 containerd[1430]: time="2025-07-10T00:30:29.572030908Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 10 00:30:29.572690 containerd[1430]: time="2025-07-10T00:30:29.572665143Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:29.575535 containerd[1430]: time="2025-07-10T00:30:29.575486635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:29.577001 containerd[1430]: time="2025-07-10T00:30:29.576943079Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.255794803s" Jul 10 00:30:29.577001 containerd[1430]: time="2025-07-10T00:30:29.576978939Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 10 00:30:29.577969 
containerd[1430]: time="2025-07-10T00:30:29.577815444Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 10 00:30:30.620965 containerd[1430]: time="2025-07-10T00:30:30.620912555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:30.621372 containerd[1430]: time="2025-07-10T00:30:30.621344110Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 10 00:30:30.622318 containerd[1430]: time="2025-07-10T00:30:30.622252775Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:30.625207 containerd[1430]: time="2025-07-10T00:30:30.625147986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:30.626466 containerd[1430]: time="2025-07-10T00:30:30.626344233Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.048497271s" Jul 10 00:30:30.626466 containerd[1430]: time="2025-07-10T00:30:30.626380397Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 10 00:30:30.627475 containerd[1430]: time="2025-07-10T00:30:30.627443068Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 10 00:30:31.083811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:30:31.093228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:30:31.196788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:30:31.200758 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:30:31.233827 kubelet[1847]: E0710 00:30:31.233763 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:30:31.236392 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:30:31.237408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:30:31.681007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount69392429.mount: Deactivated successfully. 
Jul 10 00:30:32.170526 containerd[1430]: time="2025-07-10T00:30:32.170144070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:32.171322 containerd[1430]: time="2025-07-10T00:30:32.170936745Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 10 00:30:32.172020 containerd[1430]: time="2025-07-10T00:30:32.171914224Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:32.174384 containerd[1430]: time="2025-07-10T00:30:32.174353410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:32.175209 containerd[1430]: time="2025-07-10T00:30:32.174938943Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.547460803s" Jul 10 00:30:32.175209 containerd[1430]: time="2025-07-10T00:30:32.174974476Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 10 00:30:32.175358 containerd[1430]: time="2025-07-10T00:30:32.175336423Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 10 00:30:32.875347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1702699196.mount: Deactivated successfully. 
Jul 10 00:30:33.778202 containerd[1430]: time="2025-07-10T00:30:33.778150261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:33.779186 containerd[1430]: time="2025-07-10T00:30:33.778893324Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 10 00:30:33.780026 containerd[1430]: time="2025-07-10T00:30:33.779985747Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:33.783459 containerd[1430]: time="2025-07-10T00:30:33.783412840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:33.784764 containerd[1430]: time="2025-07-10T00:30:33.784725969Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.609358347s" Jul 10 00:30:33.784764 containerd[1430]: time="2025-07-10T00:30:33.784763174Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 10 00:30:33.785267 containerd[1430]: time="2025-07-10T00:30:33.785240788Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:30:34.265993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1591864351.mount: Deactivated successfully. 
Jul 10 00:30:34.273720 containerd[1430]: time="2025-07-10T00:30:34.273670023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:34.276085 containerd[1430]: time="2025-07-10T00:30:34.276050598Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 10 00:30:34.278483 containerd[1430]: time="2025-07-10T00:30:34.278418146Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:34.281359 containerd[1430]: time="2025-07-10T00:30:34.281325176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:34.282068 containerd[1430]: time="2025-07-10T00:30:34.282027826Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 496.756529ms" Jul 10 00:30:34.282110 containerd[1430]: time="2025-07-10T00:30:34.282075281Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 00:30:34.282517 containerd[1430]: time="2025-07-10T00:30:34.282498410Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 10 00:30:34.823518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3656964481.mount: Deactivated successfully. Jul 10 00:30:36.532030 containerd[1430]: time="2025-07-10T00:30:36.531974774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:36.533071 containerd[1430]: time="2025-07-10T00:30:36.532827764Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 10 00:30:36.535607 containerd[1430]: time="2025-07-10T00:30:36.535564008Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:36.539198 containerd[1430]: time="2025-07-10T00:30:36.539159451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:36.540629 containerd[1430]: time="2025-07-10T00:30:36.540493942Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.257966077s" Jul 10 00:30:36.540629 containerd[1430]: time="2025-07-10T00:30:36.540534524Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 10 00:30:41.487665 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
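The pull sequence from 00:30:26 through 00:30:36 fetches the full control-plane image set for Kubernetes v1.33.2 (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns v1.12.0, pause 3.10 and etcd 3.5.21-0), consistent with a kubeadm pre-pull. The same set can be listed without pulling anything:

    kubeadm config images list --kubernetes-version v1.33.2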
Jul 10 00:30:41.497218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:30:41.642640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:30:41.646527 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:30:41.676919 kubelet[2011]: E0710 00:30:41.676851 2011 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:30:41.679680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:30:41.679926 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:30:41.861005 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:30:41.877327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:30:41.905986 systemd[1]: Reloading requested from client PID 2027 ('systemctl') (unit session-7.scope)... Jul 10 00:30:41.906006 systemd[1]: Reloading... Jul 10 00:30:41.986239 zram_generator::config[2066]: No configuration found. Jul 10 00:30:42.316284 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:30:42.371410 systemd[1]: Reloading finished in 465 ms. Jul 10 00:30:42.410716 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:30:42.413220 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:30:42.413426 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:30:42.414865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:30:42.511489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:30:42.515858 (kubelet)[2113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:30:42.546609 kubelet[2113]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:30:42.546609 kubelet[2113]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 10 00:30:42.546609 kubelet[2113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
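The three deprecation warnings above ask for the flags to be moved into the kubelet config file. In the kubelet.config.k8s.io/v1beta1 API the first and third map to fields as sketched below (endpoint and plugin directory taken from entries in this log); --pod-infra-container-image has no config equivalent, since the sandbox image now comes from the CRI runtime, here pause:3.8 per the SandboxImage in the containerd CRI config above.

    # additions to /var/lib/kubelet/config.yaml (sketch)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/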
Jul 10 00:30:42.546958 kubelet[2113]: I0710 00:30:42.546649 2113 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:30:43.518349 kubelet[2113]: I0710 00:30:43.518310 2113 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 10 00:30:43.518349 kubelet[2113]: I0710 00:30:43.518344 2113 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:30:43.518573 kubelet[2113]: I0710 00:30:43.518550 2113 server.go:956] "Client rotation is on, will bootstrap in background" Jul 10 00:30:43.546980 kubelet[2113]: E0710 00:30:43.546932 2113 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.74:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 10 00:30:43.549617 kubelet[2113]: I0710 00:30:43.549578 2113 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:30:43.557210 kubelet[2113]: E0710 00:30:43.557176 2113 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:30:43.558069 kubelet[2113]: I0710 00:30:43.557323 2113 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:30:43.560050 kubelet[2113]: I0710 00:30:43.560017 2113 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:30:43.560437 kubelet[2113]: I0710 00:30:43.560398 2113 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:30:43.560575 kubelet[2113]: I0710 00:30:43.560427 2113 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 10 00:30:43.560694 kubelet[2113]: I0710 00:30:43.560684 2113 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:30:43.560729 kubelet[2113]: I0710 00:30:43.560697 2113 container_manager_linux.go:303] "Creating device plugin manager" Jul 10 00:30:43.560983 kubelet[2113]: I0710 00:30:43.560960 2113 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:30:43.564080 kubelet[2113]: I0710 00:30:43.564033 2113 kubelet.go:480] "Attempting to sync node with API server" Jul 10 00:30:43.564080 kubelet[2113]: I0710 00:30:43.564079 2113 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:30:43.564191 kubelet[2113]: I0710 00:30:43.564107 2113 kubelet.go:386] "Adding apiserver pod source" Jul 10 00:30:43.565610 kubelet[2113]: I0710 00:30:43.565221 2113 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:30:43.566839 kubelet[2113]: I0710 00:30:43.566810 2113 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 10 00:30:43.567780 kubelet[2113]: I0710 00:30:43.567751 2113 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 10 00:30:43.568131 kubelet[2113]: W0710 00:30:43.568116 2113 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
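The HardEvictionThresholds in the container-manager dump above correspond to the kubelet's built-in defaults: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15% and imagefs.inodesFree < 5%. Expressed in KubeletConfiguration form, the same thresholds read:

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"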
Jul 10 00:30:43.569396 kubelet[2113]: E0710 00:30:43.569358 2113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 00:30:43.569583 kubelet[2113]: E0710 00:30:43.569562 2113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 10 00:30:43.571646 kubelet[2113]: I0710 00:30:43.571610 2113 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 10 00:30:43.571705 kubelet[2113]: I0710 00:30:43.571654 2113 server.go:1289] "Started kubelet" Jul 10 00:30:43.572254 kubelet[2113]: I0710 00:30:43.571803 2113 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:30:43.584382 kubelet[2113]: I0710 00:30:43.583871 2113 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:30:43.584382 kubelet[2113]: I0710 00:30:43.584261 2113 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:30:43.585078 kubelet[2113]: I0710 00:30:43.584979 2113 server.go:317] "Adding debug handlers to kubelet server" Jul 10 00:30:43.587778 kubelet[2113]: I0710 00:30:43.587736 2113 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:30:43.588021 kubelet[2113]: I0710 00:30:43.588006 2113 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 10 00:30:43.588358 kubelet[2113]: E0710 00:30:43.588339 2113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:30:43.588640 kubelet[2113]: I0710 00:30:43.588623 2113 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:30:43.590555 kubelet[2113]: E0710 00:30:43.590526 2113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="200ms" Jul 10 00:30:43.591270 kubelet[2113]: I0710 00:30:43.590783 2113 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 10 00:30:43.591270 kubelet[2113]: E0710 00:30:43.588934 2113 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.74:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.74:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bc6fd6679273 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:30:43.571626611 +0000 UTC m=+1.052820471,LastTimestamp:2025-07-10 00:30:43.571626611 +0000 UTC m=+1.052820471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:30:43.591898 kubelet[2113]: I0710 00:30:43.591875 2113 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:30:43.592738 kubelet[2113]: E0710 00:30:43.592706 2113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 10 00:30:43.598101 kubelet[2113]: I0710 00:30:43.598080 2113 factory.go:223] Registration of the containerd container factory successfully Jul 10 00:30:43.598530 kubelet[2113]: I0710 00:30:43.598191 2113 factory.go:223] Registration of the systemd container factory successfully Jul 10 00:30:43.598530 kubelet[2113]: I0710 00:30:43.598287 2113 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:30:43.600825 kubelet[2113]: E0710 00:30:43.600786 2113 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:30:43.606844 kubelet[2113]: I0710 00:30:43.606799 2113 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 10 00:30:43.607921 kubelet[2113]: I0710 00:30:43.607876 2113 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 10 00:30:43.608178 kubelet[2113]: I0710 00:30:43.608155 2113 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 10 00:30:43.608214 kubelet[2113]: I0710 00:30:43.608188 2113 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 10 00:30:43.608214 kubelet[2113]: I0710 00:30:43.608197 2113 kubelet.go:2436] "Starting kubelet main sync loop" Jul 10 00:30:43.608261 kubelet[2113]: E0710 00:30:43.608243 2113 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:30:43.609197 kubelet[2113]: E0710 00:30:43.609157 2113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 10 00:30:43.611400 kubelet[2113]: I0710 00:30:43.611378 2113 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 10 00:30:43.611400 kubelet[2113]: I0710 00:30:43.611397 2113 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 10 00:30:43.611487 kubelet[2113]: I0710 00:30:43.611415 2113 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:30:43.682731 kubelet[2113]: I0710 00:30:43.682696 2113 policy_none.go:49] "None policy: Start" Jul 10 00:30:43.682731 kubelet[2113]: I0710 00:30:43.682726 2113 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 10 00:30:43.682731 kubelet[2113]: I0710 00:30:43.682739 2113 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:30:43.688390 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jul 10 00:30:43.689448 kubelet[2113]: E0710 00:30:43.688851 2113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:30:43.700072 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 10 00:30:43.703156 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 10 00:30:43.708907 kubelet[2113]: E0710 00:30:43.708859 2113 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:30:43.714166 kubelet[2113]: E0710 00:30:43.714120 2113 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 10 00:30:43.714661 kubelet[2113]: I0710 00:30:43.714347 2113 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:30:43.714661 kubelet[2113]: I0710 00:30:43.714363 2113 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:30:43.714661 kubelet[2113]: I0710 00:30:43.714588 2113 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:30:43.715460 kubelet[2113]: E0710 00:30:43.715439 2113 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 10 00:30:43.715592 kubelet[2113]: E0710 00:30:43.715481 2113 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 00:30:43.792617 kubelet[2113]: E0710 00:30:43.792506 2113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="400ms" Jul 10 00:30:43.815660 kubelet[2113]: I0710 00:30:43.815606 2113 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:30:43.816159 kubelet[2113]: E0710 00:30:43.816124 2113 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jul 10 00:30:43.918559 systemd[1]: Created slice kubepods-burstable-pod146aec53a53e1e7b658034475da7f4e1.slice - libcontainer container kubepods-burstable-pod146aec53a53e1e7b658034475da7f4e1.slice. Jul 10 00:30:43.944480 kubelet[2113]: E0710 00:30:43.944428 2113 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:30:43.947776 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 10 00:30:43.963500 kubelet[2113]: E0710 00:30:43.963320 2113 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:30:43.965541 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
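The kubepods-burstable-pod<hash>.slice units created here are the pod-level cgroups for the three control-plane static pods the kubelet is about to start from /etc/kubernetes/manifests; with the systemd cgroup driver each pod slice nests under kubepods.slice, e.g. /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod146aec53a53e1e7b658034475da7f4e1.slice/. The connection-refused churn around them is likewise expected: nothing serves 10.0.0.74:6443 until the kube-apiserver static pod comes up, and the lease controller's retry interval can be seen doubling (200ms, then 400ms, then 800ms) while it waits.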
Jul 10 00:30:43.971210 kubelet[2113]: E0710 00:30:43.971174 2113 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 10 00:30:43.993661 kubelet[2113]: I0710 00:30:43.993618 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/146aec53a53e1e7b658034475da7f4e1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"146aec53a53e1e7b658034475da7f4e1\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:43.993661 kubelet[2113]: I0710 00:30:43.993661 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/146aec53a53e1e7b658034475da7f4e1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"146aec53a53e1e7b658034475da7f4e1\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:43.993795 kubelet[2113]: I0710 00:30:43.993681 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/146aec53a53e1e7b658034475da7f4e1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"146aec53a53e1e7b658034475da7f4e1\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:30:43.993795 kubelet[2113]: I0710 00:30:43.993737 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:43.993795 kubelet[2113]: I0710 00:30:43.993771 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:43.993795 kubelet[2113]: I0710 00:30:43.993792 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:43.993881 kubelet[2113]: I0710 00:30:43.993830 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:43.993881 kubelet[2113]: I0710 00:30:43.993864 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:30:43.993921 kubelet[2113]: I0710 00:30:43.993883 2113 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:30:44.017881 kubelet[2113]: I0710 00:30:44.017821 2113 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:30:44.018275 kubelet[2113]: E0710 00:30:44.018230 2113 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jul 10 00:30:44.193444 kubelet[2113]: E0710 00:30:44.193308 2113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="800ms" Jul 10 00:30:44.245114 kubelet[2113]: E0710 00:30:44.245062 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:44.245750 containerd[1430]: time="2025-07-10T00:30:44.245697652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:146aec53a53e1e7b658034475da7f4e1,Namespace:kube-system,Attempt:0,}" Jul 10 00:30:44.264782 kubelet[2113]: E0710 00:30:44.264695 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:44.265229 containerd[1430]: time="2025-07-10T00:30:44.265192033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 10 00:30:44.272636 kubelet[2113]: E0710 00:30:44.272610 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:44.273295 containerd[1430]: time="2025-07-10T00:30:44.273018288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 10 00:30:44.373818 kubelet[2113]: E0710 00:30:44.373783 2113 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.74:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 10 00:30:44.420032 kubelet[2113]: I0710 00:30:44.420006 2113 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 10 00:30:44.420445 kubelet[2113]: E0710 00:30:44.420415 2113 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost" Jul 10 00:30:44.722867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3587929466.mount: Deactivated successfully. 
Jul 10 00:30:44.728869 containerd[1430]: time="2025-07-10T00:30:44.728339762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:30:44.729697 containerd[1430]: time="2025-07-10T00:30:44.729672386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jul 10 00:30:44.731877 containerd[1430]: time="2025-07-10T00:30:44.731846935Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:30:44.733421 containerd[1430]: time="2025-07-10T00:30:44.733386108Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:30:44.734222 containerd[1430]: time="2025-07-10T00:30:44.734195376Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 10 00:30:44.734514 containerd[1430]: time="2025-07-10T00:30:44.734487530Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:30:44.734854 containerd[1430]: time="2025-07-10T00:30:44.734793092Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 10 00:30:44.737306 containerd[1430]: time="2025-07-10T00:30:44.737256914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 10 00:30:44.739106 containerd[1430]: time="2025-07-10T00:30:44.738775476Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.99542ms"
Jul 10 00:30:44.739846 containerd[1430]: time="2025-07-10T00:30:44.739625645Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 466.520231ms"
Jul 10 00:30:44.742097 containerd[1430]: time="2025-07-10T00:30:44.742035239Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 476.765445ms"
Jul 10 00:30:44.912354 containerd[1430]: time="2025-07-10T00:30:44.912253623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:30:44.912804 containerd[1430]: time="2025-07-10T00:30:44.912711585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:30:44.912937 containerd[1430]: time="2025-07-10T00:30:44.912857622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:44.913114 containerd[1430]: time="2025-07-10T00:30:44.913075017Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:44.914940 containerd[1430]: time="2025-07-10T00:30:44.914842350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:30:44.914940 containerd[1430]: time="2025-07-10T00:30:44.914895739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:30:44.914940 containerd[1430]: time="2025-07-10T00:30:44.914907145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:44.915170 containerd[1430]: time="2025-07-10T00:30:44.915006877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:44.916162 containerd[1430]: time="2025-07-10T00:30:44.915376073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:30:44.916162 containerd[1430]: time="2025-07-10T00:30:44.915416574Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:30:44.916162 containerd[1430]: time="2025-07-10T00:30:44.915432702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:44.916162 containerd[1430]: time="2025-07-10T00:30:44.915499058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:44.939223 systemd[1]: Started cri-containerd-33f33077874ffe9f114a236de397559b3b9e4172fce9a19e46e4f5bf94161a9d.scope - libcontainer container 33f33077874ffe9f114a236de397559b3b9e4172fce9a19e46e4f5bf94161a9d.
Jul 10 00:30:44.943710 systemd[1]: Started cri-containerd-5194ca68ce5a20e162f3b8b5596946cefba3e988bb7bd0b6359e974912709597.scope - libcontainer container 5194ca68ce5a20e162f3b8b5596946cefba3e988bb7bd0b6359e974912709597.
Jul 10 00:30:44.945666 systemd[1]: Started cri-containerd-ed7ef898db32ce14643816c6b25ca1a533de9588642c7052bff7369b93aa7214.scope - libcontainer container ed7ef898db32ce14643816c6b25ca1a533de9588642c7052bff7369b93aa7214.
Jul 10 00:30:44.979423 containerd[1430]: time="2025-07-10T00:30:44.979313577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"33f33077874ffe9f114a236de397559b3b9e4172fce9a19e46e4f5bf94161a9d\""
Jul 10 00:30:44.982433 kubelet[2113]: E0710 00:30:44.982379 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:44.984195 containerd[1430]: time="2025-07-10T00:30:44.984114034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed7ef898db32ce14643816c6b25ca1a533de9588642c7052bff7369b93aa7214\""
Jul 10 00:30:44.985234 kubelet[2113]: E0710 00:30:44.985089 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:44.986709 containerd[1430]: time="2025-07-10T00:30:44.986642850Z" level=info msg="CreateContainer within sandbox \"33f33077874ffe9f114a236de397559b3b9e4172fce9a19e46e4f5bf94161a9d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 10 00:30:44.988240 containerd[1430]: time="2025-07-10T00:30:44.988202594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:146aec53a53e1e7b658034475da7f4e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5194ca68ce5a20e162f3b8b5596946cefba3e988bb7bd0b6359e974912709597\""
Jul 10 00:30:44.989460 kubelet[2113]: E0710 00:30:44.989382 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:44.990746 containerd[1430]: time="2025-07-10T00:30:44.990709879Z" level=info msg="CreateContainer within sandbox \"ed7ef898db32ce14643816c6b25ca1a533de9588642c7052bff7369b93aa7214\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 10 00:30:44.993535 containerd[1430]: time="2025-07-10T00:30:44.993449927Z" level=info msg="CreateContainer within sandbox \"5194ca68ce5a20e162f3b8b5596946cefba3e988bb7bd0b6359e974912709597\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 10 00:30:44.994024 kubelet[2113]: E0710 00:30:44.993992 2113 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.74:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.74:6443: connect: connection refused" interval="1.6s"
Jul 10 00:30:45.012630 containerd[1430]: time="2025-07-10T00:30:45.012501198Z" level=info msg="CreateContainer within sandbox \"33f33077874ffe9f114a236de397559b3b9e4172fce9a19e46e4f5bf94161a9d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e22941e71bb801fe70f369ad569da78ba49c9d9d1560b38cff3634eae1988637\""
Jul 10 00:30:45.014419 containerd[1430]: time="2025-07-10T00:30:45.013295605Z" level=info msg="StartContainer for \"e22941e71bb801fe70f369ad569da78ba49c9d9d1560b38cff3634eae1988637\""
Jul 10 00:30:45.014419 containerd[1430]: time="2025-07-10T00:30:45.013726204Z" level=info msg="CreateContainer within sandbox \"ed7ef898db32ce14643816c6b25ca1a533de9588642c7052bff7369b93aa7214\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"143711f922e922cc5e889eb68385da96c57e26d25d7975b46bd29d047ad33939\""
Jul 10 00:30:45.014419 containerd[1430]: time="2025-07-10T00:30:45.014092373Z" level=info msg="StartContainer for \"143711f922e922cc5e889eb68385da96c57e26d25d7975b46bd29d047ad33939\""
Jul 10 00:30:45.019474 containerd[1430]: time="2025-07-10T00:30:45.019421758Z" level=info msg="CreateContainer within sandbox \"5194ca68ce5a20e162f3b8b5596946cefba3e988bb7bd0b6359e974912709597\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7ffe9ef74ce5c146bee8f2441ac51656da42c4fb50e5d5fd4517b46ab0709000\""
Jul 10 00:30:45.020027 containerd[1430]: time="2025-07-10T00:30:45.019992542Z" level=info msg="StartContainer for \"7ffe9ef74ce5c146bee8f2441ac51656da42c4fb50e5d5fd4517b46ab0709000\""
Jul 10 00:30:45.040313 systemd[1]: Started cri-containerd-e22941e71bb801fe70f369ad569da78ba49c9d9d1560b38cff3634eae1988637.scope - libcontainer container e22941e71bb801fe70f369ad569da78ba49c9d9d1560b38cff3634eae1988637.
Jul 10 00:30:45.044271 systemd[1]: Started cri-containerd-143711f922e922cc5e889eb68385da96c57e26d25d7975b46bd29d047ad33939.scope - libcontainer container 143711f922e922cc5e889eb68385da96c57e26d25d7975b46bd29d047ad33939.
Jul 10 00:30:45.045830 systemd[1]: Started cri-containerd-7ffe9ef74ce5c146bee8f2441ac51656da42c4fb50e5d5fd4517b46ab0709000.scope - libcontainer container 7ffe9ef74ce5c146bee8f2441ac51656da42c4fb50e5d5fd4517b46ab0709000.
Jul 10 00:30:45.062428 kubelet[2113]: E0710 00:30:45.062386 2113 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.74:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 10 00:30:45.122684 containerd[1430]: time="2025-07-10T00:30:45.117541367Z" level=info msg="StartContainer for \"7ffe9ef74ce5c146bee8f2441ac51656da42c4fb50e5d5fd4517b46ab0709000\" returns successfully"
Jul 10 00:30:45.122684 containerd[1430]: time="2025-07-10T00:30:45.117677910Z" level=info msg="StartContainer for \"143711f922e922cc5e889eb68385da96c57e26d25d7975b46bd29d047ad33939\" returns successfully"
Jul 10 00:30:45.122684 containerd[1430]: time="2025-07-10T00:30:45.117703882Z" level=info msg="StartContainer for \"e22941e71bb801fe70f369ad569da78ba49c9d9d1560b38cff3634eae1988637\" returns successfully"
Jul 10 00:30:45.143104 kubelet[2113]: E0710 00:30:45.137965 2113 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.74:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 10 00:30:45.164277 kubelet[2113]: E0710 00:30:45.160498 2113 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.74:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.74:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 10 00:30:45.224609 kubelet[2113]: I0710 00:30:45.224451 2113 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:30:45.224915 kubelet[2113]: E0710 00:30:45.224847 2113 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.74:6443/api/v1/nodes\": dial tcp 10.0.0.74:6443: connect: connection refused" node="localhost"
Jul 10 00:30:45.619370 kubelet[2113]: E0710 00:30:45.619340 2113 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:30:45.619478 kubelet[2113]: E0710 00:30:45.619464 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:45.621514 kubelet[2113]: E0710 00:30:45.621492 2113 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:30:45.621620 kubelet[2113]: E0710 00:30:45.621604 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:45.622931 kubelet[2113]: E0710 00:30:45.622909 2113 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:30:45.623052 kubelet[2113]: E0710 00:30:45.623028 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:46.626756 kubelet[2113]: E0710 00:30:46.626709 2113 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:30:46.627113 kubelet[2113]: E0710 00:30:46.626829 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:46.627113 kubelet[2113]: E0710 00:30:46.626992 2113 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:30:46.627550 kubelet[2113]: E0710 00:30:46.627185 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:46.627550 kubelet[2113]: E0710 00:30:46.627350 2113 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 10 00:30:46.627550 kubelet[2113]: E0710 00:30:46.627463 2113 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:46.719535 kubelet[2113]: E0710 00:30:46.719485 2113 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 10 00:30:46.827089 kubelet[2113]: I0710 00:30:46.827057 2113 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:30:46.839461 kubelet[2113]: I0710 00:30:46.839395 2113 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 10 00:30:46.839461 kubelet[2113]: E0710 00:30:46.839435 2113 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 10 00:30:46.871548 kubelet[2113]: E0710 00:30:46.871507 2113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:30:46.971943 kubelet[2113]: E0710 00:30:46.971806 2113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:30:47.072931 kubelet[2113]: E0710 00:30:47.072886 2113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:30:47.173704 kubelet[2113]: E0710 00:30:47.173664 2113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:30:47.274806 kubelet[2113]: E0710 00:30:47.274689 2113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:30:47.375350 kubelet[2113]: E0710 00:30:47.375312 2113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:30:47.476418 kubelet[2113]: E0710 00:30:47.476369 2113 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:30:47.589471 kubelet[2113]: I0710 00:30:47.589367 2113 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 10 00:30:47.594199 kubelet[2113]: E0710 00:30:47.594070 2113 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 10 00:30:47.594199 kubelet[2113]: I0710 00:30:47.594095 2113 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:30:47.595871 kubelet[2113]: E0710 00:30:47.595826 2113 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:30:47.595871 kubelet[2113]: I0710 00:30:47.595854 2113 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:30:47.597324 kubelet[2113]: E0710 00:30:47.597297 2113 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:30:48.569137 kubelet[2113]: I0710 00:30:48.568614 2113 apiserver.go:52] "Watching apiserver"
Jul 10 00:30:48.592609 kubelet[2113]: I0710 00:30:48.592563 2113 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 10 00:30:48.990836 systemd[1]: Reloading requested from client PID 2400 ('systemctl') (unit session-7.scope)...
Jul 10 00:30:48.990853 systemd[1]: Reloading...
Jul 10 00:30:49.067158 zram_generator::config[2439]: No configuration found.
Jul 10 00:30:49.155525 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 10 00:30:49.223271 systemd[1]: Reloading finished in 232 ms.
Jul 10 00:30:49.259115 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:30:49.274267 systemd[1]: kubelet.service: Deactivated successfully.
Jul 10 00:30:49.274513 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:30:49.274585 systemd[1]: kubelet.service: Consumed 1.436s CPU time, 128.3M memory peak, 0B memory swap peak.
Jul 10 00:30:49.283435 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 10 00:30:49.398589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 10 00:30:49.401545 (kubelet)[2481]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 10 00:30:49.437175 kubelet[2481]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:30:49.437175 kubelet[2481]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 10 00:30:49.437175 kubelet[2481]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 10 00:30:49.437512 kubelet[2481]: I0710 00:30:49.437222 2481 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 10 00:30:49.443375 kubelet[2481]: I0710 00:30:49.443326 2481 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 10 00:30:49.443375 kubelet[2481]: I0710 00:30:49.443357 2481 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 10 00:30:49.443583 kubelet[2481]: I0710 00:30:49.443557 2481 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 10 00:30:49.444754 kubelet[2481]: I0710 00:30:49.444728 2481 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 10 00:30:49.448118 kubelet[2481]: I0710 00:30:49.448065 2481 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 10 00:30:49.457067 kubelet[2481]: E0710 00:30:49.454795 2481 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 10 00:30:49.457067 kubelet[2481]: I0710 00:30:49.454830 2481 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 10 00:30:49.457198 kubelet[2481]: I0710 00:30:49.457180 2481 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 10 00:30:49.457483 kubelet[2481]: I0710 00:30:49.457456 2481 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 10 00:30:49.457700 kubelet[2481]: I0710 00:30:49.457550 2481 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 10 00:30:49.457817 kubelet[2481]: I0710 00:30:49.457803 2481 topology_manager.go:138] "Creating topology manager with none policy"
Jul 10 00:30:49.457863 kubelet[2481]: I0710 00:30:49.457856 2481 container_manager_linux.go:303] "Creating device plugin manager"
Jul 10 00:30:49.457962 kubelet[2481]: I0710 00:30:49.457951 2481 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:30:49.458196 kubelet[2481]: I0710 00:30:49.458172 2481 kubelet.go:480] "Attempting to sync node with API server"
Jul 10 00:30:49.458288 kubelet[2481]: I0710 00:30:49.458276 2481 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 10 00:30:49.458444 kubelet[2481]: I0710 00:30:49.458430 2481 kubelet.go:386] "Adding apiserver pod source"
Jul 10 00:30:49.458503 kubelet[2481]: I0710 00:30:49.458495 2481 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 10 00:30:49.459388 kubelet[2481]: I0710 00:30:49.459362 2481 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 10 00:30:49.460024 kubelet[2481]: I0710 00:30:49.460003 2481 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 10 00:30:49.464257 kubelet[2481]: I0710 00:30:49.462418 2481 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 10 00:30:49.464257 kubelet[2481]: I0710 00:30:49.462471 2481 server.go:1289] "Started kubelet"
Jul 10 00:30:49.464257 kubelet[2481]: I0710 00:30:49.462731 2481 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 10 00:30:49.464257 kubelet[2481]: I0710 00:30:49.462829 2481 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 10 00:30:49.464257 kubelet[2481]: I0710 00:30:49.463151 2481 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 10 00:30:49.464257 kubelet[2481]: I0710 00:30:49.463549 2481 server.go:317] "Adding debug handlers to kubelet server"
Jul 10 00:30:49.466854 kubelet[2481]: I0710 00:30:49.466830 2481 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 10 00:30:49.468790 kubelet[2481]: E0710 00:30:49.468566 2481 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 10 00:30:49.468790 kubelet[2481]: I0710 00:30:49.468793 2481 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 10 00:30:49.468983 kubelet[2481]: I0710 00:30:49.468962 2481 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 10 00:30:49.469124 kubelet[2481]: I0710 00:30:49.469109 2481 reconciler.go:26] "Reconciler: start to sync state"
Jul 10 00:30:49.469923 kubelet[2481]: E0710 00:30:49.469896 2481 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 10 00:30:49.471137 kubelet[2481]: I0710 00:30:49.471119 2481 factory.go:223] Registration of the containerd container factory successfully
Jul 10 00:30:49.471137 kubelet[2481]: I0710 00:30:49.471135 2481 factory.go:223] Registration of the systemd container factory successfully
Jul 10 00:30:49.471203 kubelet[2481]: I0710 00:30:49.471194 2481 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 10 00:30:49.476674 kubelet[2481]: I0710 00:30:49.473962 2481 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 10 00:30:49.503782 kubelet[2481]: I0710 00:30:49.503742 2481 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 10 00:30:49.507753 kubelet[2481]: I0710 00:30:49.506679 2481 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 10 00:30:49.507753 kubelet[2481]: I0710 00:30:49.506707 2481 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 10 00:30:49.507753 kubelet[2481]: I0710 00:30:49.506724 2481 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 10 00:30:49.507753 kubelet[2481]: I0710 00:30:49.506731 2481 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 10 00:30:49.507753 kubelet[2481]: E0710 00:30:49.506780 2481 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 10 00:30:49.524981 kubelet[2481]: I0710 00:30:49.523774 2481 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 10 00:30:49.524981 kubelet[2481]: I0710 00:30:49.523800 2481 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 10 00:30:49.524981 kubelet[2481]: I0710 00:30:49.523823 2481 state_mem.go:36] "Initialized new in-memory state store"
Jul 10 00:30:49.524981 kubelet[2481]: I0710 00:30:49.524642 2481 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 10 00:30:49.524981 kubelet[2481]: I0710 00:30:49.524661 2481 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 10 00:30:49.524981 kubelet[2481]: I0710 00:30:49.524679 2481 policy_none.go:49] "None policy: Start"
Jul 10 00:30:49.524981 kubelet[2481]: I0710 00:30:49.524705 2481 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 10 00:30:49.524981 kubelet[2481]: I0710 00:30:49.524718 2481 state_mem.go:35] "Initializing new in-memory state store"
Jul 10 00:30:49.524981 kubelet[2481]: I0710 00:30:49.524828 2481 state_mem.go:75] "Updated machine memory state"
Jul 10 00:30:49.530226 kubelet[2481]: E0710 00:30:49.530192 2481 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 10 00:30:49.530404 kubelet[2481]: I0710 00:30:49.530384 2481 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 10 00:30:49.530457 kubelet[2481]: I0710 00:30:49.530402 2481 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 10 00:30:49.530998 kubelet[2481]: I0710 00:30:49.530616 2481 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 10 00:30:49.531582 kubelet[2481]: E0710 00:30:49.531552 2481 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 10 00:30:49.608114 kubelet[2481]: I0710 00:30:49.608074 2481 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 10 00:30:49.608114 kubelet[2481]: I0710 00:30:49.608097 2481 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:30:49.608282 kubelet[2481]: I0710 00:30:49.608151 2481 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:30:49.634540 kubelet[2481]: I0710 00:30:49.634509 2481 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 10 00:30:49.643263 kubelet[2481]: I0710 00:30:49.642563 2481 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 10 00:30:49.643263 kubelet[2481]: I0710 00:30:49.642650 2481 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 10 00:30:49.670681 kubelet[2481]: I0710 00:30:49.670613 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:30:49.670681 kubelet[2481]: I0710 00:30:49.670662 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:30:49.670681 kubelet[2481]: I0710 00:30:49.670691 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:30:49.670887 kubelet[2481]: I0710 00:30:49.670708 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:30:49.670887 kubelet[2481]: I0710 00:30:49.670729 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 10 00:30:49.670887 kubelet[2481]: I0710 00:30:49.670746 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/146aec53a53e1e7b658034475da7f4e1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"146aec53a53e1e7b658034475da7f4e1\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:30:49.670887 kubelet[2481]: I0710 00:30:49.670762 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/146aec53a53e1e7b658034475da7f4e1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"146aec53a53e1e7b658034475da7f4e1\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:30:49.670887 kubelet[2481]: I0710 00:30:49.670778 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:30:49.671006 kubelet[2481]: I0710 00:30:49.670796 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/146aec53a53e1e7b658034475da7f4e1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"146aec53a53e1e7b658034475da7f4e1\") " pod="kube-system/kube-apiserver-localhost"
Jul 10 00:30:49.915066 kubelet[2481]: E0710 00:30:49.914947 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:49.915182 kubelet[2481]: E0710 00:30:49.915124 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:49.915182 kubelet[2481]: E0710 00:30:49.914951 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:50.459071 kubelet[2481]: I0710 00:30:50.458897 2481 apiserver.go:52] "Watching apiserver"
Jul 10 00:30:50.469141 kubelet[2481]: I0710 00:30:50.469065 2481 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 10 00:30:50.516099 kubelet[2481]: I0710 00:30:50.515207 2481 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:30:50.516099 kubelet[2481]: E0710 00:30:50.515429 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:50.517174 kubelet[2481]: I0710 00:30:50.517142 2481 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:30:50.523204 kubelet[2481]: E0710 00:30:50.522393 2481 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 10 00:30:50.523204 kubelet[2481]: E0710 00:30:50.522568 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:50.525612 kubelet[2481]: E0710 00:30:50.525574 2481 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 10 00:30:50.525872 kubelet[2481]: E0710 00:30:50.525854 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:50.568413 kubelet[2481]: I0710 00:30:50.568172 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.568153196 podStartE2EDuration="1.568153196s" podCreationTimestamp="2025-07-10 00:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:30:50.561848832 +0000 UTC m=+1.156929994" watchObservedRunningTime="2025-07-10 00:30:50.568153196 +0000 UTC m=+1.163234318"
Jul 10 00:30:50.568413 kubelet[2481]: I0710 00:30:50.568307 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.568301435 podStartE2EDuration="1.568301435s" podCreationTimestamp="2025-07-10 00:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:30:50.550768542 +0000 UTC m=+1.145849704" watchObservedRunningTime="2025-07-10 00:30:50.568301435 +0000 UTC m=+1.163382597"
Jul 10 00:30:50.577552 kubelet[2481]: I0710 00:30:50.577415 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.577398447 podStartE2EDuration="1.577398447s" podCreationTimestamp="2025-07-10 00:30:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:30:50.575662195 +0000 UTC m=+1.170743357" watchObservedRunningTime="2025-07-10 00:30:50.577398447 +0000 UTC m=+1.172479609"
Jul 10 00:30:51.517568 kubelet[2481]: E0710 00:30:51.517519 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:51.517897 kubelet[2481]: E0710 00:30:51.517522 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:51.517897 kubelet[2481]: E0710 00:30:51.517637 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:52.519323 kubelet[2481]: E0710 00:30:52.519287 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:52.733841 kubelet[2481]: E0710 00:30:52.733808 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:53.878486 kubelet[2481]: I0710 00:30:53.878339 2481 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 10 00:30:53.879519 kubelet[2481]: I0710 00:30:53.878830 2481 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 10 00:30:53.879551 containerd[1430]: time="2025-07-10T00:30:53.878618372Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 10 00:30:54.547910 systemd[1]: Created slice kubepods-besteffort-pod341297bc_2e6f_4875_8273_8c6981a024c4.slice - libcontainer container kubepods-besteffort-pod341297bc_2e6f_4875_8273_8c6981a024c4.slice.
Jul 10 00:30:54.609841 kubelet[2481]: I0710 00:30:54.609771 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/341297bc-2e6f-4875-8273-8c6981a024c4-var-lib-calico\") pod \"tigera-operator-747864d56d-c9dlk\" (UID: \"341297bc-2e6f-4875-8273-8c6981a024c4\") " pod="tigera-operator/tigera-operator-747864d56d-c9dlk"
Jul 10 00:30:54.609841 kubelet[2481]: I0710 00:30:54.609822 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28ngw\" (UniqueName: \"kubernetes.io/projected/341297bc-2e6f-4875-8273-8c6981a024c4-kube-api-access-28ngw\") pod \"tigera-operator-747864d56d-c9dlk\" (UID: \"341297bc-2e6f-4875-8273-8c6981a024c4\") " pod="tigera-operator/tigera-operator-747864d56d-c9dlk"
Jul 10 00:30:54.716675 systemd[1]: Created slice kubepods-besteffort-pod7e794924_c68c_4b11_a18b_046fadeb17a4.slice - libcontainer container kubepods-besteffort-pod7e794924_c68c_4b11_a18b_046fadeb17a4.slice.
Jul 10 00:30:54.810827 kubelet[2481]: I0710 00:30:54.810689 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e794924-c68c-4b11-a18b-046fadeb17a4-kube-proxy\") pod \"kube-proxy-2swg7\" (UID: \"7e794924-c68c-4b11-a18b-046fadeb17a4\") " pod="kube-system/kube-proxy-2swg7"
Jul 10 00:30:54.810827 kubelet[2481]: I0710 00:30:54.810734 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9cvf\" (UniqueName: \"kubernetes.io/projected/7e794924-c68c-4b11-a18b-046fadeb17a4-kube-api-access-f9cvf\") pod \"kube-proxy-2swg7\" (UID: \"7e794924-c68c-4b11-a18b-046fadeb17a4\") " pod="kube-system/kube-proxy-2swg7"
Jul 10 00:30:54.810827 kubelet[2481]: I0710 00:30:54.810756 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e794924-c68c-4b11-a18b-046fadeb17a4-lib-modules\") pod \"kube-proxy-2swg7\" (UID: \"7e794924-c68c-4b11-a18b-046fadeb17a4\") " pod="kube-system/kube-proxy-2swg7"
Jul 10 00:30:54.810827 kubelet[2481]: I0710 00:30:54.810775 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e794924-c68c-4b11-a18b-046fadeb17a4-xtables-lock\") pod \"kube-proxy-2swg7\" (UID: \"7e794924-c68c-4b11-a18b-046fadeb17a4\") " pod="kube-system/kube-proxy-2swg7"
Jul 10 00:30:54.859008 containerd[1430]: time="2025-07-10T00:30:54.858965305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-c9dlk,Uid:341297bc-2e6f-4875-8273-8c6981a024c4,Namespace:tigera-operator,Attempt:0,}"
Jul 10 00:30:54.884359 containerd[1430]: time="2025-07-10T00:30:54.884112369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:30:54.884359 containerd[1430]: time="2025-07-10T00:30:54.884173340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:30:54.884359 containerd[1430]: time="2025-07-10T00:30:54.884201225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:54.884359 containerd[1430]: time="2025-07-10T00:30:54.884287881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:54.912252 systemd[1]: Started cri-containerd-93de621cdaaa26811a0d29e60d0f31e3f021c34bd81ecd966eaccb6c289fe0e7.scope - libcontainer container 93de621cdaaa26811a0d29e60d0f31e3f021c34bd81ecd966eaccb6c289fe0e7.
Jul 10 00:30:54.948320 containerd[1430]: time="2025-07-10T00:30:54.948241695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-c9dlk,Uid:341297bc-2e6f-4875-8273-8c6981a024c4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"93de621cdaaa26811a0d29e60d0f31e3f021c34bd81ecd966eaccb6c289fe0e7\""
Jul 10 00:30:54.952248 containerd[1430]: time="2025-07-10T00:30:54.952026653Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 10 00:30:55.024192 kubelet[2481]: E0710 00:30:55.024145 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:55.025874 containerd[1430]: time="2025-07-10T00:30:55.025829670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2swg7,Uid:7e794924-c68c-4b11-a18b-046fadeb17a4,Namespace:kube-system,Attempt:0,}"
Jul 10 00:30:55.048029 containerd[1430]: time="2025-07-10T00:30:55.047730020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:30:55.048029 containerd[1430]: time="2025-07-10T00:30:55.047799152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:30:55.048029 containerd[1430]: time="2025-07-10T00:30:55.047814514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:55.048029 containerd[1430]: time="2025-07-10T00:30:55.047898089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:30:55.071308 systemd[1]: Started cri-containerd-ed045d3a29ecf8f2f5a60093a4cd8bcb536e703ca4a68b660a79237e2dd028d9.scope - libcontainer container ed045d3a29ecf8f2f5a60093a4cd8bcb536e703ca4a68b660a79237e2dd028d9.
Jul 10 00:30:55.095395 containerd[1430]: time="2025-07-10T00:30:55.095349086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2swg7,Uid:7e794924-c68c-4b11-a18b-046fadeb17a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed045d3a29ecf8f2f5a60093a4cd8bcb536e703ca4a68b660a79237e2dd028d9\""
Jul 10 00:30:55.096108 kubelet[2481]: E0710 00:30:55.096078 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:55.105957 containerd[1430]: time="2025-07-10T00:30:55.105915756Z" level=info msg="CreateContainer within sandbox \"ed045d3a29ecf8f2f5a60093a4cd8bcb536e703ca4a68b660a79237e2dd028d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 10 00:30:55.121185 containerd[1430]: time="2025-07-10T00:30:55.121105769Z" level=info msg="CreateContainer within sandbox \"ed045d3a29ecf8f2f5a60093a4cd8bcb536e703ca4a68b660a79237e2dd028d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"49e82f8252704a167a0c9e35680fed3d9ee40eacad518143a66170c49acec455\""
Jul 10 00:30:55.121893 containerd[1430]: time="2025-07-10T00:30:55.121775802Z" level=info msg="StartContainer for \"49e82f8252704a167a0c9e35680fed3d9ee40eacad518143a66170c49acec455\""
Jul 10 00:30:55.145229 systemd[1]: Started cri-containerd-49e82f8252704a167a0c9e35680fed3d9ee40eacad518143a66170c49acec455.scope - libcontainer container 49e82f8252704a167a0c9e35680fed3d9ee40eacad518143a66170c49acec455.
Jul 10 00:30:55.174358 containerd[1430]: time="2025-07-10T00:30:55.174253931Z" level=info msg="StartContainer for \"49e82f8252704a167a0c9e35680fed3d9ee40eacad518143a66170c49acec455\" returns successfully"
Jul 10 00:30:55.420269 kubelet[2481]: E0710 00:30:55.420236 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:55.531656 kubelet[2481]: E0710 00:30:55.531412 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:55.533308 kubelet[2481]: E0710 00:30:55.533280 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:55.550760 kubelet[2481]: I0710 00:30:55.549244 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2swg7" podStartSLOduration=1.549226126 podStartE2EDuration="1.549226126s" podCreationTimestamp="2025-07-10 00:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:30:55.549219485 +0000 UTC m=+6.144300647" watchObservedRunningTime="2025-07-10 00:30:55.549226126 +0000 UTC m=+6.144307248"
Jul 10 00:30:56.360753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4250789916.mount: Deactivated successfully.
Jul 10 00:30:56.544961 kubelet[2481]: E0710 00:30:56.544920 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:57.947673 kubelet[2481]: E0710 00:30:57.947611 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:58.094369 containerd[1430]: time="2025-07-10T00:30:58.094313153Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:30:58.095134 containerd[1430]: time="2025-07-10T00:30:58.095094985Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 10 00:30:58.095851 containerd[1430]: time="2025-07-10T00:30:58.095813969Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:30:58.098323 containerd[1430]: time="2025-07-10T00:30:58.098288964Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:30:58.098931 containerd[1430]: time="2025-07-10T00:30:58.098896412Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 3.1467661s"
Jul 10 00:30:58.098966 containerd[1430]: time="2025-07-10T00:30:58.098933457Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 10 00:30:58.108368 containerd[1430]: time="2025-07-10T00:30:58.108256637Z" level=info msg="CreateContainer within sandbox \"93de621cdaaa26811a0d29e60d0f31e3f021c34bd81ecd966eaccb6c289fe0e7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 10 00:30:58.119440 containerd[1430]: time="2025-07-10T00:30:58.119399358Z" level=info msg="CreateContainer within sandbox \"93de621cdaaa26811a0d29e60d0f31e3f021c34bd81ecd966eaccb6c289fe0e7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7d8cda4a0fd37f66e44ceca4fc08be3b319e1f649ccad247c34e8dae0eaff328\""
Jul 10 00:30:58.119996 containerd[1430]: time="2025-07-10T00:30:58.119969880Z" level=info msg="StartContainer for \"7d8cda4a0fd37f66e44ceca4fc08be3b319e1f649ccad247c34e8dae0eaff328\""
Jul 10 00:30:58.149258 systemd[1]: Started cri-containerd-7d8cda4a0fd37f66e44ceca4fc08be3b319e1f649ccad247c34e8dae0eaff328.scope - libcontainer container 7d8cda4a0fd37f66e44ceca4fc08be3b319e1f649ccad247c34e8dae0eaff328.
Jul 10 00:30:58.177157 containerd[1430]: time="2025-07-10T00:30:58.177117094Z" level=info msg="StartContainer for \"7d8cda4a0fd37f66e44ceca4fc08be3b319e1f649ccad247c34e8dae0eaff328\" returns successfully"
Jul 10 00:30:58.550378 kubelet[2481]: E0710 00:30:58.549254 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:30:58.560908 kubelet[2481]: I0710 00:30:58.560850 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-c9dlk" podStartSLOduration=1.40822745 podStartE2EDuration="4.560833764s" podCreationTimestamp="2025-07-10 00:30:54 +0000 UTC" firstStartedPulling="2025-07-10 00:30:54.951718318 +0000 UTC m=+5.546799440" lastFinishedPulling="2025-07-10 00:30:58.104324632 +0000 UTC m=+8.699405754" observedRunningTime="2025-07-10 00:30:58.560719148 +0000 UTC m=+9.155800310" watchObservedRunningTime="2025-07-10 00:30:58.560833764 +0000 UTC m=+9.155914886"
Jul 10 00:30:59.580415 kubelet[2481]: E0710 00:30:59.580308 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:31:02.743387 kubelet[2481]: E0710 00:31:02.743331 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:31:03.063467 update_engine[1421]: I20250710 00:31:03.063122 1421 update_attempter.cc:509] Updating boot flags...
Jul 10 00:31:03.112068 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2873)
Jul 10 00:31:03.159119 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2877)
Jul 10 00:31:03.578307 sudo[1611]: pam_unix(sudo:session): session closed for user root
Jul 10 00:31:03.590807 sshd[1608]: pam_unix(sshd:session): session closed for user core
Jul 10 00:31:03.595035 systemd[1]: sshd@6-10.0.0.74:22-10.0.0.1:33344.service: Deactivated successfully.
Jul 10 00:31:03.597038 systemd[1]: session-7.scope: Deactivated successfully.
Jul 10 00:31:03.597297 systemd[1]: session-7.scope: Consumed 7.529s CPU time, 153.5M memory peak, 0B memory swap peak.
Jul 10 00:31:03.598136 systemd-logind[1417]: Session 7 logged out. Waiting for processes to exit.
Jul 10 00:31:03.601184 systemd-logind[1417]: Removed session 7.
Jul 10 00:31:08.488582 systemd[1]: Created slice kubepods-besteffort-pod44a594e0_90b7_49e2_a4d9_030a3f31c91b.slice - libcontainer container kubepods-besteffort-pod44a594e0_90b7_49e2_a4d9_030a3f31c91b.slice.
Jul 10 00:31:08.517411 kubelet[2481]: I0710 00:31:08.517362 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5fzt\" (UniqueName: \"kubernetes.io/projected/44a594e0-90b7-49e2-a4d9-030a3f31c91b-kube-api-access-p5fzt\") pod \"calico-typha-58857d786b-jfccl\" (UID: \"44a594e0-90b7-49e2-a4d9-030a3f31c91b\") " pod="calico-system/calico-typha-58857d786b-jfccl"
Jul 10 00:31:08.517411 kubelet[2481]: I0710 00:31:08.517416 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44a594e0-90b7-49e2-a4d9-030a3f31c91b-tigera-ca-bundle\") pod \"calico-typha-58857d786b-jfccl\" (UID: \"44a594e0-90b7-49e2-a4d9-030a3f31c91b\") " pod="calico-system/calico-typha-58857d786b-jfccl"
Jul 10 00:31:08.517411 kubelet[2481]: I0710 00:31:08.517434 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/44a594e0-90b7-49e2-a4d9-030a3f31c91b-typha-certs\") pod \"calico-typha-58857d786b-jfccl\" (UID: \"44a594e0-90b7-49e2-a4d9-030a3f31c91b\") " pod="calico-system/calico-typha-58857d786b-jfccl"
Jul 10 00:31:08.794330 kubelet[2481]: E0710 00:31:08.794200 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:31:08.797069 containerd[1430]: time="2025-07-10T00:31:08.797013154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58857d786b-jfccl,Uid:44a594e0-90b7-49e2-a4d9-030a3f31c91b,Namespace:calico-system,Attempt:0,}"
Jul 10 00:31:08.911198 containerd[1430]: time="2025-07-10T00:31:08.909580183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:31:08.911198 containerd[1430]: time="2025-07-10T00:31:08.909648789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:31:08.911198 containerd[1430]: time="2025-07-10T00:31:08.909665030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:31:08.911198 containerd[1430]: time="2025-07-10T00:31:08.909779240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:31:08.942277 systemd[1]: Started cri-containerd-68c39754069b74c92a10ad36a6491143090c9cb7df886c2b14b5a95c979d628b.scope - libcontainer container 68c39754069b74c92a10ad36a6491143090c9cb7df886c2b14b5a95c979d628b.
Jul 10 00:31:08.966540 systemd[1]: Created slice kubepods-besteffort-podaf26075f_dab6_4913_a64f_1175e5858514.slice - libcontainer container kubepods-besteffort-podaf26075f_dab6_4913_a64f_1175e5858514.slice.
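[Editor's note] The three VerifyControllerAttachedVolume entries above correspond to one projected service-account-token volume, one ConfigMap volume, and one Secret volume in the typha pod spec (the UniqueName prefix names the volume plugin). A sketch of how those three stanzas would look with the k8s.io/api types, compilable in a module that requires k8s.io/api; this is a reconstruction from the log, and the referenced Secret/ConfigMap object names are assumed to match the volume names:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// volumes reconstructs the three volumes the reconciler attached for
// calico-typha-58857d786b-jfccl; inferred from the log, not the real manifest.
func volumes() []corev1.Volume {
	return []corev1.Volume{
		{
			// kubernetes.io/projected -> the auto-generated service account token volume.
			Name: "kube-api-access-p5fzt",
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{
						{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{Path: "token"}},
					},
				},
			},
		},
		{
			// kubernetes.io/configmap -> CA bundle shared by the Calico components.
			Name: "tigera-ca-bundle",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "tigera-ca-bundle"},
				},
			},
		},
		{
			// kubernetes.io/secret -> TLS material for typha's server port.
			Name: "typha-certs",
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "typha-certs"},
			},
		},
	}
}

func main() {
	for _, v := range volumes() {
		fmt.Println(v.Name)
	}
}
```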
Jul 10 00:31:09.021282 kubelet[2481]: I0710 00:31:09.021194 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/af26075f-dab6-4913-a64f-1175e5858514-policysync\") pod \"calico-node-nnd7m\" (UID: \"af26075f-dab6-4913-a64f-1175e5858514\") " pod="calico-system/calico-node-nnd7m"
Jul 10 00:31:09.021282 kubelet[2481]: I0710 00:31:09.021272 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/af26075f-dab6-4913-a64f-1175e5858514-cni-log-dir\") pod \"calico-node-nnd7m\" (UID: \"af26075f-dab6-4913-a64f-1175e5858514\") " pod="calico-system/calico-node-nnd7m"
Jul 10 00:31:09.021282 kubelet[2481]: I0710 00:31:09.021292 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af26075f-dab6-4913-a64f-1175e5858514-tigera-ca-bundle\") pod \"calico-node-nnd7m\" (UID: \"af26075f-dab6-4913-a64f-1175e5858514\") " pod="calico-system/calico-node-nnd7m"
Jul 10 00:31:09.021627 kubelet[2481]: I0710 00:31:09.021311 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/af26075f-dab6-4913-a64f-1175e5858514-cni-bin-dir\") pod \"calico-node-nnd7m\" (UID: \"af26075f-dab6-4913-a64f-1175e5858514\") " pod="calico-system/calico-node-nnd7m"
Jul 10 00:31:09.021627 kubelet[2481]: I0710 00:31:09.021329 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af26075f-dab6-4913-a64f-1175e5858514-var-lib-calico\") pod \"calico-node-nnd7m\" (UID: \"af26075f-dab6-4913-a64f-1175e5858514\") " pod="calico-system/calico-node-nnd7m"
Jul 10 00:31:09.021627 kubelet[2481]: I0710 00:31:09.021345 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/af26075f-dab6-4913-a64f-1175e5858514-var-run-calico\") pod \"calico-node-nnd7m\" (UID: \"af26075f-dab6-4913-a64f-1175e5858514\") " pod="calico-system/calico-node-nnd7m"
Jul 10 00:31:09.021627 kubelet[2481]: I0710 00:31:09.021360 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnsjm\" (UniqueName: \"kubernetes.io/projected/af26075f-dab6-4913-a64f-1175e5858514-kube-api-access-wnsjm\") pod \"calico-node-nnd7m\" (UID: \"af26075f-dab6-4913-a64f-1175e5858514\") " pod="calico-system/calico-node-nnd7m"
Jul 10 00:31:09.021627 kubelet[2481]: I0710 00:31:09.021376 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/af26075f-dab6-4913-a64f-1175e5858514-cni-net-dir\") pod \"calico-node-nnd7m\" (UID: \"af26075f-dab6-4913-a64f-1175e5858514\") " pod="calico-system/calico-node-nnd7m"
Jul 10 00:31:09.021894 kubelet[2481]: I0710 00:31:09.021390 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/af26075f-dab6-4913-a64f-1175e5858514-flexvol-driver-host\") pod \"calico-node-nnd7m\" (UID: \"af26075f-dab6-4913-a64f-1175e5858514\") " pod="calico-system/calico-node-nnd7m"
Jul 10 00:31:09.021894 kubelet[2481]: I0710 00:31:09.021416 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af26075f-dab6-4913-a64f-1175e5858514-lib-modules\") pod \"calico-node-nnd7m\" (UID: \"af26075f-dab6-4913-a64f-1175e5858514\") " pod="calico-system/calico-node-nnd7m"
Jul 10 00:31:09.021894 kubelet[2481]: I0710 00:31:09.021432 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/af26075f-dab6-4913-a64f-1175e5858514-node-certs\") pod \"calico-node-nnd7m\" (UID: \"af26075f-dab6-4913-a64f-1175e5858514\") " pod="calico-system/calico-node-nnd7m"
Jul 10 00:31:09.021894 kubelet[2481]: I0710 00:31:09.021447 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af26075f-dab6-4913-a64f-1175e5858514-xtables-lock\") pod \"calico-node-nnd7m\" (UID: \"af26075f-dab6-4913-a64f-1175e5858514\") " pod="calico-system/calico-node-nnd7m"
Jul 10 00:31:09.035121 containerd[1430]: time="2025-07-10T00:31:09.035008472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58857d786b-jfccl,Uid:44a594e0-90b7-49e2-a4d9-030a3f31c91b,Namespace:calico-system,Attempt:0,} returns sandbox id \"68c39754069b74c92a10ad36a6491143090c9cb7df886c2b14b5a95c979d628b\""
Jul 10 00:31:09.035756 kubelet[2481]: E0710 00:31:09.035726 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:31:09.037652 containerd[1430]: time="2025-07-10T00:31:09.037614407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 10 00:31:09.107548 kubelet[2481]: E0710 00:31:09.104439 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8l5v7" podUID="08557a49-6fcf-4236-a001-85a4edaa7064"
Jul 10 00:31:09.139981 kubelet[2481]: E0710 00:31:09.139947 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:31:09.140300 kubelet[2481]: W0710 00:31:09.140171 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:31:09.144511 kubelet[2481]: E0710 00:31:09.144440 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 10 00:31:09.200443 kubelet[2481]: E0710 00:31:09.200312 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:31:09.200443 kubelet[2481]: W0710 00:31:09.200339 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:31:09.200443 kubelet[2481]: E0710 00:31:09.200361 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 10 00:31:09.200699 kubelet[2481]: E0710 00:31:09.200686 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.206723 kubelet[2481]: W0710 00:31:09.200754 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.207005 kubelet[2481]: E0710 00:31:09.206858 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.207155 kubelet[2481]: E0710 00:31:09.207140 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.207329 kubelet[2481]: W0710 00:31:09.207223 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.207329 kubelet[2481]: E0710 00:31:09.207241 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.207583 kubelet[2481]: E0710 00:31:09.207464 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.207583 kubelet[2481]: W0710 00:31:09.207477 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.207583 kubelet[2481]: E0710 00:31:09.207486 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.207758 kubelet[2481]: E0710 00:31:09.207744 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.207815 kubelet[2481]: W0710 00:31:09.207805 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.207868 kubelet[2481]: E0710 00:31:09.207858 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.208106 kubelet[2481]: E0710 00:31:09.208094 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.208198 kubelet[2481]: W0710 00:31:09.208185 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.208250 kubelet[2481]: E0710 00:31:09.208239 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:31:09.208459 kubelet[2481]: E0710 00:31:09.208446 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.208525 kubelet[2481]: W0710 00:31:09.208515 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.208658 kubelet[2481]: E0710 00:31:09.208573 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.208769 kubelet[2481]: E0710 00:31:09.208757 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.208830 kubelet[2481]: W0710 00:31:09.208818 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.208886 kubelet[2481]: E0710 00:31:09.208876 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.209128 kubelet[2481]: E0710 00:31:09.209114 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.209216 kubelet[2481]: W0710 00:31:09.209203 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.209268 kubelet[2481]: E0710 00:31:09.209258 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.209588 kubelet[2481]: E0710 00:31:09.209498 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.209588 kubelet[2481]: W0710 00:31:09.209508 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.209588 kubelet[2481]: E0710 00:31:09.209517 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.209746 kubelet[2481]: E0710 00:31:09.209734 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.209794 kubelet[2481]: W0710 00:31:09.209783 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.209840 kubelet[2481]: E0710 00:31:09.209830 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:31:09.210063 kubelet[2481]: E0710 00:31:09.210030 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.210144 kubelet[2481]: W0710 00:31:09.210130 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.210296 kubelet[2481]: E0710 00:31:09.210200 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.210412 kubelet[2481]: E0710 00:31:09.210400 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.210468 kubelet[2481]: W0710 00:31:09.210458 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.210520 kubelet[2481]: E0710 00:31:09.210510 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.210747 kubelet[2481]: E0710 00:31:09.210735 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.210819 kubelet[2481]: W0710 00:31:09.210807 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.210869 kubelet[2481]: E0710 00:31:09.210860 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.211083 kubelet[2481]: E0710 00:31:09.211071 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.211269 kubelet[2481]: W0710 00:31:09.211145 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.211269 kubelet[2481]: E0710 00:31:09.211171 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.211410 kubelet[2481]: E0710 00:31:09.211398 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.211472 kubelet[2481]: W0710 00:31:09.211459 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.211520 kubelet[2481]: E0710 00:31:09.211511 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:31:09.211750 kubelet[2481]: E0710 00:31:09.211738 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.211817 kubelet[2481]: W0710 00:31:09.211806 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.211870 kubelet[2481]: E0710 00:31:09.211861 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.212097 kubelet[2481]: E0710 00:31:09.212085 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.212191 kubelet[2481]: W0710 00:31:09.212177 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.212324 kubelet[2481]: E0710 00:31:09.212240 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.212430 kubelet[2481]: E0710 00:31:09.212419 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.212499 kubelet[2481]: W0710 00:31:09.212487 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.212547 kubelet[2481]: E0710 00:31:09.212538 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.212760 kubelet[2481]: E0710 00:31:09.212747 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.212880 kubelet[2481]: W0710 00:31:09.212866 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.212997 kubelet[2481]: E0710 00:31:09.212921 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.223298 kubelet[2481]: E0710 00:31:09.223269 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.223298 kubelet[2481]: W0710 00:31:09.223289 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.223298 kubelet[2481]: E0710 00:31:09.223306 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:31:09.223455 kubelet[2481]: I0710 00:31:09.223334 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/08557a49-6fcf-4236-a001-85a4edaa7064-socket-dir\") pod \"csi-node-driver-8l5v7\" (UID: \"08557a49-6fcf-4236-a001-85a4edaa7064\") " pod="calico-system/csi-node-driver-8l5v7" Jul 10 00:31:09.223571 kubelet[2481]: E0710 00:31:09.223558 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.223601 kubelet[2481]: W0710 00:31:09.223573 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.223601 kubelet[2481]: E0710 00:31:09.223582 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.223647 kubelet[2481]: I0710 00:31:09.223601 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08557a49-6fcf-4236-a001-85a4edaa7064-kubelet-dir\") pod \"csi-node-driver-8l5v7\" (UID: \"08557a49-6fcf-4236-a001-85a4edaa7064\") " pod="calico-system/csi-node-driver-8l5v7" Jul 10 00:31:09.223853 kubelet[2481]: E0710 00:31:09.223836 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.223892 kubelet[2481]: W0710 00:31:09.223854 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.223892 kubelet[2481]: E0710 00:31:09.223868 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.224443 kubelet[2481]: E0710 00:31:09.224383 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.224443 kubelet[2481]: W0710 00:31:09.224398 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.224443 kubelet[2481]: E0710 00:31:09.224411 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.225081 kubelet[2481]: E0710 00:31:09.224986 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.225081 kubelet[2481]: W0710 00:31:09.225077 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.225177 kubelet[2481]: E0710 00:31:09.225092 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:31:09.225177 kubelet[2481]: I0710 00:31:09.225119 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7kd9\" (UniqueName: \"kubernetes.io/projected/08557a49-6fcf-4236-a001-85a4edaa7064-kube-api-access-r7kd9\") pod \"csi-node-driver-8l5v7\" (UID: \"08557a49-6fcf-4236-a001-85a4edaa7064\") " pod="calico-system/csi-node-driver-8l5v7" Jul 10 00:31:09.225416 kubelet[2481]: E0710 00:31:09.225401 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.225416 kubelet[2481]: W0710 00:31:09.225415 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.225473 kubelet[2481]: E0710 00:31:09.225426 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.225572 kubelet[2481]: I0710 00:31:09.225531 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/08557a49-6fcf-4236-a001-85a4edaa7064-varrun\") pod \"csi-node-driver-8l5v7\" (UID: \"08557a49-6fcf-4236-a001-85a4edaa7064\") " pod="calico-system/csi-node-driver-8l5v7" Jul 10 00:31:09.225661 kubelet[2481]: E0710 00:31:09.225609 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.225661 kubelet[2481]: W0710 00:31:09.225625 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.225661 kubelet[2481]: E0710 00:31:09.225634 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.225813 kubelet[2481]: E0710 00:31:09.225799 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.225813 kubelet[2481]: W0710 00:31:09.225809 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.225857 kubelet[2481]: E0710 00:31:09.225818 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.226027 kubelet[2481]: E0710 00:31:09.226010 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.226027 kubelet[2481]: W0710 00:31:09.226022 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.226485 kubelet[2481]: E0710 00:31:09.226078 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:31:09.226485 kubelet[2481]: I0710 00:31:09.226108 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/08557a49-6fcf-4236-a001-85a4edaa7064-registration-dir\") pod \"csi-node-driver-8l5v7\" (UID: \"08557a49-6fcf-4236-a001-85a4edaa7064\") " pod="calico-system/csi-node-driver-8l5v7" Jul 10 00:31:09.226485 kubelet[2481]: E0710 00:31:09.226366 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.226485 kubelet[2481]: W0710 00:31:09.226383 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.226485 kubelet[2481]: E0710 00:31:09.226396 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.226828 kubelet[2481]: E0710 00:31:09.226603 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.226859 kubelet[2481]: W0710 00:31:09.226831 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.226859 kubelet[2481]: E0710 00:31:09.226846 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.227118 kubelet[2481]: E0710 00:31:09.227101 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.227118 kubelet[2481]: W0710 00:31:09.227116 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.227181 kubelet[2481]: E0710 00:31:09.227127 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.227347 kubelet[2481]: E0710 00:31:09.227333 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.227379 kubelet[2481]: W0710 00:31:09.227346 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.227379 kubelet[2481]: E0710 00:31:09.227356 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:31:09.227552 kubelet[2481]: E0710 00:31:09.227539 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.227585 kubelet[2481]: W0710 00:31:09.227551 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.227585 kubelet[2481]: E0710 00:31:09.227561 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.227782 kubelet[2481]: E0710 00:31:09.227760 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.227782 kubelet[2481]: W0710 00:31:09.227773 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.227782 kubelet[2481]: E0710 00:31:09.227782 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.271548 containerd[1430]: time="2025-07-10T00:31:09.270942055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nnd7m,Uid:af26075f-dab6-4913-a64f-1175e5858514,Namespace:calico-system,Attempt:0,}" Jul 10 00:31:09.295666 containerd[1430]: time="2025-07-10T00:31:09.295561850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:31:09.295795 containerd[1430]: time="2025-07-10T00:31:09.295663739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:31:09.295795 containerd[1430]: time="2025-07-10T00:31:09.295691981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:09.295847 containerd[1430]: time="2025-07-10T00:31:09.295787189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:09.323257 systemd[1]: Started cri-containerd-54acd005fc73f85f48ab0608ea2e595d7f734d1dcd0f15b500f388e5cd8fcd10.scope - libcontainer container 54acd005fc73f85f48ab0608ea2e595d7f734d1dcd0f15b500f388e5cd8fcd10. Jul 10 00:31:09.328598 kubelet[2481]: E0710 00:31:09.328411 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.328598 kubelet[2481]: W0710 00:31:09.328435 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.328598 kubelet[2481]: E0710 00:31:09.328455 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:31:09.328890 kubelet[2481]: E0710 00:31:09.328876 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.329087 kubelet[2481]: W0710 00:31:09.328961 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.329087 kubelet[2481]: E0710 00:31:09.328980 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.329456 kubelet[2481]: E0710 00:31:09.329436 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.329557 kubelet[2481]: W0710 00:31:09.329510 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.329557 kubelet[2481]: E0710 00:31:09.329543 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.329948 kubelet[2481]: E0710 00:31:09.329929 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.330136 kubelet[2481]: W0710 00:31:09.330026 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.330136 kubelet[2481]: E0710 00:31:09.330062 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.330562 kubelet[2481]: E0710 00:31:09.330537 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.330720 kubelet[2481]: W0710 00:31:09.330635 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.330720 kubelet[2481]: E0710 00:31:09.330654 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.331230 kubelet[2481]: E0710 00:31:09.330947 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.331230 kubelet[2481]: W0710 00:31:09.330958 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.331230 kubelet[2481]: E0710 00:31:09.330969 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:31:09.331648 kubelet[2481]: E0710 00:31:09.331569 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.331648 kubelet[2481]: W0710 00:31:09.331583 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.331648 kubelet[2481]: E0710 00:31:09.331597 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.332125 kubelet[2481]: E0710 00:31:09.332022 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.332125 kubelet[2481]: W0710 00:31:09.332034 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.332125 kubelet[2481]: E0710 00:31:09.332071 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.332576 kubelet[2481]: E0710 00:31:09.332477 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.332576 kubelet[2481]: W0710 00:31:09.332489 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.332576 kubelet[2481]: E0710 00:31:09.332500 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.332868 kubelet[2481]: E0710 00:31:09.332785 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.332868 kubelet[2481]: W0710 00:31:09.332796 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.332868 kubelet[2481]: E0710 00:31:09.332805 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.333731 kubelet[2481]: E0710 00:31:09.333598 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.333731 kubelet[2481]: W0710 00:31:09.333619 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.333731 kubelet[2481]: E0710 00:31:09.333631 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:31:09.334026 kubelet[2481]: E0710 00:31:09.333935 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.334284 kubelet[2481]: W0710 00:31:09.334196 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.334284 kubelet[2481]: E0710 00:31:09.334218 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.335810 kubelet[2481]: E0710 00:31:09.335792 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.335947 kubelet[2481]: W0710 00:31:09.335875 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.335947 kubelet[2481]: E0710 00:31:09.335892 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.336327 kubelet[2481]: E0710 00:31:09.336248 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.336327 kubelet[2481]: W0710 00:31:09.336260 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.336327 kubelet[2481]: E0710 00:31:09.336271 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.336721 kubelet[2481]: E0710 00:31:09.336653 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.336721 kubelet[2481]: W0710 00:31:09.336666 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.336721 kubelet[2481]: E0710 00:31:09.336676 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.338233 kubelet[2481]: E0710 00:31:09.338144 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.338233 kubelet[2481]: W0710 00:31:09.338163 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.338233 kubelet[2481]: E0710 00:31:09.338182 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:31:09.338692 kubelet[2481]: E0710 00:31:09.338575 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.338692 kubelet[2481]: W0710 00:31:09.338587 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.338692 kubelet[2481]: E0710 00:31:09.338597 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.338862 kubelet[2481]: E0710 00:31:09.338844 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.338928 kubelet[2481]: W0710 00:31:09.338916 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.338976 kubelet[2481]: E0710 00:31:09.338967 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.339353 kubelet[2481]: E0710 00:31:09.339227 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.339353 kubelet[2481]: W0710 00:31:09.339238 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.339353 kubelet[2481]: E0710 00:31:09.339247 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.339917 kubelet[2481]: E0710 00:31:09.339900 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.340125 kubelet[2481]: W0710 00:31:09.340038 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.340125 kubelet[2481]: E0710 00:31:09.340105 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.342636 kubelet[2481]: E0710 00:31:09.342477 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.342636 kubelet[2481]: W0710 00:31:09.342498 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.342636 kubelet[2481]: E0710 00:31:09.342510 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:31:09.342902 kubelet[2481]: E0710 00:31:09.342852 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.342902 kubelet[2481]: W0710 00:31:09.342865 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.342902 kubelet[2481]: E0710 00:31:09.342890 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.343554 kubelet[2481]: E0710 00:31:09.343508 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.343554 kubelet[2481]: W0710 00:31:09.343522 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.343554 kubelet[2481]: E0710 00:31:09.343535 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.344190 kubelet[2481]: E0710 00:31:09.344166 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.344380 kubelet[2481]: W0710 00:31:09.344268 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.344380 kubelet[2481]: E0710 00:31:09.344286 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:09.344705 kubelet[2481]: E0710 00:31:09.344692 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.344806 kubelet[2481]: W0710 00:31:09.344765 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.344806 kubelet[2481]: E0710 00:31:09.344781 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 00:31:09.353449 containerd[1430]: time="2025-07-10T00:31:09.353401592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nnd7m,Uid:af26075f-dab6-4913-a64f-1175e5858514,Namespace:calico-system,Attempt:0,} returns sandbox id \"54acd005fc73f85f48ab0608ea2e595d7f734d1dcd0f15b500f388e5cd8fcd10\"" Jul 10 00:31:09.354845 kubelet[2481]: E0710 00:31:09.354780 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 00:31:09.354845 kubelet[2481]: W0710 00:31:09.354797 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 00:31:09.354845 kubelet[2481]: E0710 00:31:09.354814 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 00:31:10.100329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount384808033.mount: Deactivated successfully. Jul 10 00:31:10.507924 kubelet[2481]: E0710 00:31:10.507790 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8l5v7" podUID="08557a49-6fcf-4236-a001-85a4edaa7064" Jul 10 00:31:10.512432 containerd[1430]: time="2025-07-10T00:31:10.512372349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:10.514206 containerd[1430]: time="2025-07-10T00:31:10.514163771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 10 00:31:10.516546 containerd[1430]: time="2025-07-10T00:31:10.515243616Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:10.517763 containerd[1430]: time="2025-07-10T00:31:10.517718371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:10.518550 containerd[1430]: time="2025-07-10T00:31:10.518517115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.480860024s" Jul 10 00:31:10.518657 containerd[1430]: time="2025-07-10T00:31:10.518641564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 10 00:31:10.520238 containerd[1430]: time="2025-07-10T00:31:10.520204688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 00:31:10.540702 containerd[1430]: time="2025-07-10T00:31:10.540662423Z" level=info msg="CreateContainer within sandbox \"68c39754069b74c92a10ad36a6491143090c9cb7df886c2b14b5a95c979d628b\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 00:31:10.549955 containerd[1430]: time="2025-07-10T00:31:10.549899953Z" level=info msg="CreateContainer within sandbox \"68c39754069b74c92a10ad36a6491143090c9cb7df886c2b14b5a95c979d628b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"12aca06b4dab3eea8b024cbcd3a7edc359501f26caa70b921d7f288b4fb4dcb2\"" Jul 10 00:31:10.553055 containerd[1430]: time="2025-07-10T00:31:10.551056964Z" level=info msg="StartContainer for \"12aca06b4dab3eea8b024cbcd3a7edc359501f26caa70b921d7f288b4fb4dcb2\"" Jul 10 00:31:10.579276 systemd[1]: Started cri-containerd-12aca06b4dab3eea8b024cbcd3a7edc359501f26caa70b921d7f288b4fb4dcb2.scope - libcontainer container 12aca06b4dab3eea8b024cbcd3a7edc359501f26caa70b921d7f288b4fb4dcb2. Jul 10 00:31:10.634039 containerd[1430]: time="2025-07-10T00:31:10.631485516Z" level=info msg="StartContainer for \"12aca06b4dab3eea8b024cbcd3a7edc359501f26caa70b921d7f288b4fb4dcb2\" returns successfully" Jul 10 00:31:11.547395 containerd[1430]: time="2025-07-10T00:31:11.547348754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:11.548771 containerd[1430]: time="2025-07-10T00:31:11.548731819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 10 00:31:11.549609 containerd[1430]: time="2025-07-10T00:31:11.549567802Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:11.551954 containerd[1430]: time="2025-07-10T00:31:11.551894098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:11.552731 containerd[1430]: time="2025-07-10T00:31:11.552697918Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.032453067s" Jul 10 00:31:11.552780 containerd[1430]: time="2025-07-10T00:31:11.552734401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 10 00:31:11.558131 containerd[1430]: time="2025-07-10T00:31:11.557308346Z" level=info msg="CreateContainer within sandbox \"54acd005fc73f85f48ab0608ea2e595d7f734d1dcd0f15b500f388e5cd8fcd10\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 00:31:11.570838 containerd[1430]: time="2025-07-10T00:31:11.570769203Z" level=info msg="CreateContainer within sandbox \"54acd005fc73f85f48ab0608ea2e595d7f734d1dcd0f15b500f388e5cd8fcd10\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"68ed6cb4cbdcba61584d229decc4b11eb01e605394e3cc7fdf8fbd034de80216\"" Jul 10 00:31:11.571587 containerd[1430]: time="2025-07-10T00:31:11.571462255Z" level=info msg="StartContainer for \"68ed6cb4cbdcba61584d229decc4b11eb01e605394e3cc7fdf8fbd034de80216\"" Jul 10 00:31:11.593036 kubelet[2481]: 
E0710 00:31:11.592357 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:31:11.608995 kubelet[2481]: I0710 00:31:11.608643 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58857d786b-jfccl" podStartSLOduration=2.1259052880000002 podStartE2EDuration="3.608627662s" podCreationTimestamp="2025-07-10 00:31:08 +0000 UTC" firstStartedPulling="2025-07-10 00:31:09.036938951 +0000 UTC m=+19.632020113" lastFinishedPulling="2025-07-10 00:31:10.519661325 +0000 UTC m=+21.114742487" observedRunningTime="2025-07-10 00:31:11.608574178 +0000 UTC m=+22.203655340" watchObservedRunningTime="2025-07-10 00:31:11.608627662 +0000 UTC m=+22.203708824"
Jul 10 00:31:11.618296 systemd[1]: Started cri-containerd-68ed6cb4cbdcba61584d229decc4b11eb01e605394e3cc7fdf8fbd034de80216.scope - libcontainer container 68ed6cb4cbdcba61584d229decc4b11eb01e605394e3cc7fdf8fbd034de80216.
Jul 10 00:31:11.622576 systemd[1]: run-containerd-runc-k8s.io-68ed6cb4cbdcba61584d229decc4b11eb01e605394e3cc7fdf8fbd034de80216-runc.Z3Vcsw.mount: Deactivated successfully.
Jul 10 00:31:11.629574 kubelet[2481]: E0710 00:31:11.629532 2481 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:31:11.629574 kubelet[2481]: W0710 00:31:11.629560 2481 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:31:11.629574 kubelet[2481]: E0710 00:31:11.629581 2481 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The three-line FlexVolume probe failure above (driver-call.go:262 / driver-call.go:149 / plugins.go:703) recurs with identical content roughly thirty more times between 00:31:11.629 and 00:31:11.659, interleaved with the entries kept below; the duplicate triplets are elided here.]
Jul 10 00:31:11.652555 containerd[1430]: time="2025-07-10T00:31:11.652435690Z" level=info msg="StartContainer for \"68ed6cb4cbdcba61584d229decc4b11eb01e605394e3cc7fdf8fbd034de80216\" returns successfully"
Jul 10 00:31:11.707343 systemd[1]: cri-containerd-68ed6cb4cbdcba61584d229decc4b11eb01e605394e3cc7fdf8fbd034de80216.scope: Deactivated successfully.
Jul 10 00:31:11.745709 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68ed6cb4cbdcba61584d229decc4b11eb01e605394e3cc7fdf8fbd034de80216-rootfs.mount: Deactivated successfully.
Jul 10 00:31:11.752844 containerd[1430]: time="2025-07-10T00:31:11.748609352Z" level=info msg="shim disconnected" id=68ed6cb4cbdcba61584d229decc4b11eb01e605394e3cc7fdf8fbd034de80216 namespace=k8s.io
Jul 10 00:31:11.752844 containerd[1430]: time="2025-07-10T00:31:11.752843872Z" level=warning msg="cleaning up after shim disconnected" id=68ed6cb4cbdcba61584d229decc4b11eb01e605394e3cc7fdf8fbd034de80216 namespace=k8s.io
Jul 10 00:31:11.753020 containerd[1430]: time="2025-07-10T00:31:11.752864433Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:31:12.507340 kubelet[2481]: E0710 00:31:12.507292 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8l5v7" podUID="08557a49-6fcf-4236-a001-85a4edaa7064"
Jul 10 00:31:12.595580 kubelet[2481]: I0710 00:31:12.595546 2481 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 10 00:31:12.596757 kubelet[2481]: E0710 00:31:12.596527 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:31:12.597780 containerd[1430]: time="2025-07-10T00:31:12.597751534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 10 00:31:14.507669 kubelet[2481]: E0710 00:31:14.507616 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8l5v7" podUID="08557a49-6fcf-4236-a001-85a4edaa7064"
Jul 10 00:31:14.858396 containerd[1430]: time="2025-07-10T00:31:14.858340326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:31:14.858843 containerd[1430]: time="2025-07-10T00:31:14.858811477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320"
Jul 10 00:31:14.859526 containerd[1430]: time="2025-07-10T00:31:14.859506563Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\"
value:\"managed\"}" Jul 10 00:31:14.861529 containerd[1430]: time="2025-07-10T00:31:14.861498056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:14.862334 containerd[1430]: time="2025-07-10T00:31:14.862308549Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.264519293s" Jul 10 00:31:14.862390 containerd[1430]: time="2025-07-10T00:31:14.862339951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 10 00:31:14.865889 containerd[1430]: time="2025-07-10T00:31:14.865844384Z" level=info msg="CreateContainer within sandbox \"54acd005fc73f85f48ab0608ea2e595d7f734d1dcd0f15b500f388e5cd8fcd10\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 00:31:14.880359 containerd[1430]: time="2025-07-10T00:31:14.880310024Z" level=info msg="CreateContainer within sandbox \"54acd005fc73f85f48ab0608ea2e595d7f734d1dcd0f15b500f388e5cd8fcd10\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"20f5eafd7ea4ae8062826b428dd00e9cf8c2a7f9bda9cfea231f7fa9bbcec9aa\"" Jul 10 00:31:14.881109 containerd[1430]: time="2025-07-10T00:31:14.881077475Z" level=info msg="StartContainer for \"20f5eafd7ea4ae8062826b428dd00e9cf8c2a7f9bda9cfea231f7fa9bbcec9aa\"" Jul 10 00:31:14.906226 systemd[1]: Started cri-containerd-20f5eafd7ea4ae8062826b428dd00e9cf8c2a7f9bda9cfea231f7fa9bbcec9aa.scope - libcontainer container 20f5eafd7ea4ae8062826b428dd00e9cf8c2a7f9bda9cfea231f7fa9bbcec9aa. Jul 10 00:31:14.934089 containerd[1430]: time="2025-07-10T00:31:14.934034190Z" level=info msg="StartContainer for \"20f5eafd7ea4ae8062826b428dd00e9cf8c2a7f9bda9cfea231f7fa9bbcec9aa\" returns successfully" Jul 10 00:31:15.595097 containerd[1430]: time="2025-07-10T00:31:15.595010474Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:31:15.597530 systemd[1]: cri-containerd-20f5eafd7ea4ae8062826b428dd00e9cf8c2a7f9bda9cfea231f7fa9bbcec9aa.scope: Deactivated successfully. Jul 10 00:31:15.619060 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20f5eafd7ea4ae8062826b428dd00e9cf8c2a7f9bda9cfea231f7fa9bbcec9aa-rootfs.mount: Deactivated successfully. 
Jul 10 00:31:15.627412 kubelet[2481]: I0710 00:31:15.619548 2481 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 10 00:31:15.661965 containerd[1430]: time="2025-07-10T00:31:15.661886014Z" level=info msg="shim disconnected" id=20f5eafd7ea4ae8062826b428dd00e9cf8c2a7f9bda9cfea231f7fa9bbcec9aa namespace=k8s.io Jul 10 00:31:15.661965 containerd[1430]: time="2025-07-10T00:31:15.661940097Z" level=warning msg="cleaning up after shim disconnected" id=20f5eafd7ea4ae8062826b428dd00e9cf8c2a7f9bda9cfea231f7fa9bbcec9aa namespace=k8s.io Jul 10 00:31:15.661965 containerd[1430]: time="2025-07-10T00:31:15.661950538Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:31:15.687823 systemd[1]: Created slice kubepods-burstable-podc3848646_0b29_4349_9a03_f64c3a70a1ee.slice - libcontainer container kubepods-burstable-podc3848646_0b29_4349_9a03_f64c3a70a1ee.slice. Jul 10 00:31:15.690716 containerd[1430]: time="2025-07-10T00:31:15.689670024Z" level=warning msg="cleanup warnings time=\"2025-07-10T00:31:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 10 00:31:15.696589 systemd[1]: Created slice kubepods-besteffort-podcd45c6d9_27eb_494a_9d3f_a28a02a70496.slice - libcontainer container kubepods-besteffort-podcd45c6d9_27eb_494a_9d3f_a28a02a70496.slice. Jul 10 00:31:15.703924 systemd[1]: Created slice kubepods-besteffort-podfde35633_5bd3_4224_9472_f70c96f585a5.slice - libcontainer container kubepods-besteffort-podfde35633_5bd3_4224_9472_f70c96f585a5.slice. Jul 10 00:31:15.711362 systemd[1]: Created slice kubepods-besteffort-pod44306273_2b03_49e5_af4b_bfd726f65b5f.slice - libcontainer container kubepods-besteffort-pod44306273_2b03_49e5_af4b_bfd726f65b5f.slice. Jul 10 00:31:15.719597 systemd[1]: Created slice kubepods-burstable-podc30ef12d_7fea_496b_86fe_53d8caa8bd6a.slice - libcontainer container kubepods-burstable-podc30ef12d_7fea_496b_86fe_53d8caa8bd6a.slice. Jul 10 00:31:15.725577 systemd[1]: Created slice kubepods-besteffort-pod42229c1c_0bab_4152_99c9_e48ab9263892.slice - libcontainer container kubepods-besteffort-pod42229c1c_0bab_4152_99c9_e48ab9263892.slice. Jul 10 00:31:15.731767 systemd[1]: Created slice kubepods-besteffort-pod55111cac_7833_49df_8d9d_eb348d5bef6c.slice - libcontainer container kubepods-besteffort-pod55111cac_7833_49df_8d9d_eb348d5bef6c.slice. 
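[Note: the pod_startup_latency_tracker entry at 00:31:11.608 above is arithmetically self-consistent. podStartE2EDuration is observedRunningTime minus podCreationTimestamp: 00:31:11.608627662 - 00:31:08 = 3.608627662s. podStartSLOduration is consistent with that end-to-end figure minus the image-pull window (lastFinishedPulling - firstStartedPulling = 00:31:10.519661325 - 00:31:09.036938951 = 1.482722374s): 3.608627662 - 1.482722374 = 2.125905288s, which matches the logged 2.1259052880000002 up to floating-point noise.]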
Jul 10 00:31:15.780496 kubelet[2481]: I0710 00:31:15.780445 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plhvf\" (UniqueName: \"kubernetes.io/projected/cd45c6d9-27eb-494a-9d3f-a28a02a70496-kube-api-access-plhvf\") pod \"goldmane-768f4c5c69-qztlf\" (UID: \"cd45c6d9-27eb-494a-9d3f-a28a02a70496\") " pod="calico-system/goldmane-768f4c5c69-qztlf" Jul 10 00:31:15.780728 kubelet[2481]: I0710 00:31:15.780715 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qw5f\" (UniqueName: \"kubernetes.io/projected/c3848646-0b29-4349-9a03-f64c3a70a1ee-kube-api-access-9qw5f\") pod \"coredns-674b8bbfcf-4wp2k\" (UID: \"c3848646-0b29-4349-9a03-f64c3a70a1ee\") " pod="kube-system/coredns-674b8bbfcf-4wp2k" Jul 10 00:31:15.780848 kubelet[2481]: I0710 00:31:15.780834 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c30ef12d-7fea-496b-86fe-53d8caa8bd6a-config-volume\") pod \"coredns-674b8bbfcf-lhmtn\" (UID: \"c30ef12d-7fea-496b-86fe-53d8caa8bd6a\") " pod="kube-system/coredns-674b8bbfcf-lhmtn" Jul 10 00:31:15.780955 kubelet[2481]: I0710 00:31:15.780943 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7m2q\" (UniqueName: \"kubernetes.io/projected/c30ef12d-7fea-496b-86fe-53d8caa8bd6a-kube-api-access-n7m2q\") pod \"coredns-674b8bbfcf-lhmtn\" (UID: \"c30ef12d-7fea-496b-86fe-53d8caa8bd6a\") " pod="kube-system/coredns-674b8bbfcf-lhmtn" Jul 10 00:31:15.781106 kubelet[2481]: I0710 00:31:15.781092 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3848646-0b29-4349-9a03-f64c3a70a1ee-config-volume\") pod \"coredns-674b8bbfcf-4wp2k\" (UID: \"c3848646-0b29-4349-9a03-f64c3a70a1ee\") " pod="kube-system/coredns-674b8bbfcf-4wp2k" Jul 10 00:31:15.781232 kubelet[2481]: I0710 00:31:15.781217 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd45c6d9-27eb-494a-9d3f-a28a02a70496-config\") pod \"goldmane-768f4c5c69-qztlf\" (UID: \"cd45c6d9-27eb-494a-9d3f-a28a02a70496\") " pod="calico-system/goldmane-768f4c5c69-qztlf" Jul 10 00:31:15.781351 kubelet[2481]: I0710 00:31:15.781323 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fde35633-5bd3-4224-9472-f70c96f585a5-calico-apiserver-certs\") pod \"calico-apiserver-c98486c8f-n2v22\" (UID: \"fde35633-5bd3-4224-9472-f70c96f585a5\") " pod="calico-apiserver/calico-apiserver-c98486c8f-n2v22" Jul 10 00:31:15.781441 kubelet[2481]: I0710 00:31:15.781419 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44306273-2b03-49e5-af4b-bfd726f65b5f-tigera-ca-bundle\") pod \"calico-kube-controllers-cdc7dc74d-2zddx\" (UID: \"44306273-2b03-49e5-af4b-bfd726f65b5f\") " pod="calico-system/calico-kube-controllers-cdc7dc74d-2zddx" Jul 10 00:31:15.781516 kubelet[2481]: I0710 00:31:15.781466 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxsjm\" (UniqueName: 
\"kubernetes.io/projected/44306273-2b03-49e5-af4b-bfd726f65b5f-kube-api-access-bxsjm\") pod \"calico-kube-controllers-cdc7dc74d-2zddx\" (UID: \"44306273-2b03-49e5-af4b-bfd726f65b5f\") " pod="calico-system/calico-kube-controllers-cdc7dc74d-2zddx" Jul 10 00:31:15.781551 kubelet[2481]: I0710 00:31:15.781513 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/42229c1c-0bab-4152-99c9-e48ab9263892-calico-apiserver-certs\") pod \"calico-apiserver-c98486c8f-5xk6f\" (UID: \"42229c1c-0bab-4152-99c9-e48ab9263892\") " pod="calico-apiserver/calico-apiserver-c98486c8f-5xk6f" Jul 10 00:31:15.781551 kubelet[2481]: I0710 00:31:15.781538 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btksf\" (UniqueName: \"kubernetes.io/projected/fde35633-5bd3-4224-9472-f70c96f585a5-kube-api-access-btksf\") pod \"calico-apiserver-c98486c8f-n2v22\" (UID: \"fde35633-5bd3-4224-9472-f70c96f585a5\") " pod="calico-apiserver/calico-apiserver-c98486c8f-n2v22" Jul 10 00:31:15.781612 kubelet[2481]: I0710 00:31:15.781554 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/55111cac-7833-49df-8d9d-eb348d5bef6c-whisker-backend-key-pair\") pod \"whisker-848d798b6-7lz4s\" (UID: \"55111cac-7833-49df-8d9d-eb348d5bef6c\") " pod="calico-system/whisker-848d798b6-7lz4s" Jul 10 00:31:15.781612 kubelet[2481]: I0710 00:31:15.781573 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55111cac-7833-49df-8d9d-eb348d5bef6c-whisker-ca-bundle\") pod \"whisker-848d798b6-7lz4s\" (UID: \"55111cac-7833-49df-8d9d-eb348d5bef6c\") " pod="calico-system/whisker-848d798b6-7lz4s" Jul 10 00:31:15.781612 kubelet[2481]: I0710 00:31:15.781587 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw8km\" (UniqueName: \"kubernetes.io/projected/55111cac-7833-49df-8d9d-eb348d5bef6c-kube-api-access-pw8km\") pod \"whisker-848d798b6-7lz4s\" (UID: \"55111cac-7833-49df-8d9d-eb348d5bef6c\") " pod="calico-system/whisker-848d798b6-7lz4s" Jul 10 00:31:15.781612 kubelet[2481]: I0710 00:31:15.781604 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd45c6d9-27eb-494a-9d3f-a28a02a70496-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-qztlf\" (UID: \"cd45c6d9-27eb-494a-9d3f-a28a02a70496\") " pod="calico-system/goldmane-768f4c5c69-qztlf" Jul 10 00:31:15.781721 kubelet[2481]: I0710 00:31:15.781625 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cd45c6d9-27eb-494a-9d3f-a28a02a70496-goldmane-key-pair\") pod \"goldmane-768f4c5c69-qztlf\" (UID: \"cd45c6d9-27eb-494a-9d3f-a28a02a70496\") " pod="calico-system/goldmane-768f4c5c69-qztlf" Jul 10 00:31:15.781721 kubelet[2481]: I0710 00:31:15.781647 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fpjj\" (UniqueName: \"kubernetes.io/projected/42229c1c-0bab-4152-99c9-e48ab9263892-kube-api-access-2fpjj\") pod \"calico-apiserver-c98486c8f-5xk6f\" (UID: \"42229c1c-0bab-4152-99c9-e48ab9263892\") " 
pod="calico-apiserver/calico-apiserver-c98486c8f-5xk6f" Jul 10 00:31:15.994547 kubelet[2481]: E0710 00:31:15.994425 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:15.995198 containerd[1430]: time="2025-07-10T00:31:15.995120881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4wp2k,Uid:c3848646-0b29-4349-9a03-f64c3a70a1ee,Namespace:kube-system,Attempt:0,}" Jul 10 00:31:16.002572 containerd[1430]: time="2025-07-10T00:31:16.002367539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-qztlf,Uid:cd45c6d9-27eb-494a-9d3f-a28a02a70496,Namespace:calico-system,Attempt:0,}" Jul 10 00:31:16.006948 containerd[1430]: time="2025-07-10T00:31:16.006911178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c98486c8f-n2v22,Uid:fde35633-5bd3-4224-9472-f70c96f585a5,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:31:16.014967 containerd[1430]: time="2025-07-10T00:31:16.014649731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cdc7dc74d-2zddx,Uid:44306273-2b03-49e5-af4b-bfd726f65b5f,Namespace:calico-system,Attempt:0,}" Jul 10 00:31:16.022583 kubelet[2481]: E0710 00:31:16.022553 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:16.024476 containerd[1430]: time="2025-07-10T00:31:16.024315483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lhmtn,Uid:c30ef12d-7fea-496b-86fe-53d8caa8bd6a,Namespace:kube-system,Attempt:0,}" Jul 10 00:31:16.049240 containerd[1430]: time="2025-07-10T00:31:16.047338211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c98486c8f-5xk6f,Uid:42229c1c-0bab-4152-99c9-e48ab9263892,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:31:16.063163 containerd[1430]: time="2025-07-10T00:31:16.054405044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-848d798b6-7lz4s,Uid:55111cac-7833-49df-8d9d-eb348d5bef6c,Namespace:calico-system,Attempt:0,}" Jul 10 00:31:16.440788 containerd[1430]: time="2025-07-10T00:31:16.440730405Z" level=error msg="Failed to destroy network for sandbox \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.441238 containerd[1430]: time="2025-07-10T00:31:16.441137310Z" level=error msg="Failed to destroy network for sandbox \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.443523 containerd[1430]: time="2025-07-10T00:31:16.443233198Z" level=error msg="encountered an error cleaning up failed sandbox \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.443523 containerd[1430]: time="2025-07-10T00:31:16.443305282Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c98486c8f-5xk6f,Uid:42229c1c-0bab-4152-99c9-e48ab9263892,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.443523 containerd[1430]: time="2025-07-10T00:31:16.443252079Z" level=error msg="encountered an error cleaning up failed sandbox \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.443523 containerd[1430]: time="2025-07-10T00:31:16.443393568Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cdc7dc74d-2zddx,Uid:44306273-2b03-49e5-af4b-bfd726f65b5f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.445620 containerd[1430]: time="2025-07-10T00:31:16.445339607Z" level=error msg="Failed to destroy network for sandbox \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.446227 containerd[1430]: time="2025-07-10T00:31:16.445390330Z" level=error msg="Failed to destroy network for sandbox \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.446227 containerd[1430]: time="2025-07-10T00:31:16.446034569Z" level=error msg="encountered an error cleaning up failed sandbox \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.446227 containerd[1430]: time="2025-07-10T00:31:16.446134735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c98486c8f-n2v22,Uid:fde35633-5bd3-4224-9472-f70c96f585a5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.446227 containerd[1430]: time="2025-07-10T00:31:16.446163297Z" level=error msg="encountered an error cleaning up failed sandbox \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.446415 containerd[1430]: time="2025-07-10T00:31:16.446232701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-qztlf,Uid:cd45c6d9-27eb-494a-9d3f-a28a02a70496,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.447007 kubelet[2481]: E0710 00:31:16.446587 2481 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.447007 kubelet[2481]: E0710 00:31:16.446647 2481 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c98486c8f-n2v22" Jul 10 00:31:16.447007 kubelet[2481]: E0710 00:31:16.446679 2481 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c98486c8f-n2v22" Jul 10 00:31:16.447211 kubelet[2481]: E0710 00:31:16.446726 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c98486c8f-n2v22_calico-apiserver(fde35633-5bd3-4224-9472-f70c96f585a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c98486c8f-n2v22_calico-apiserver(fde35633-5bd3-4224-9472-f70c96f585a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c98486c8f-n2v22" podUID="fde35633-5bd3-4224-9472-f70c96f585a5" Jul 10 00:31:16.447211 kubelet[2481]: E0710 00:31:16.446779 2481 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.447211 kubelet[2481]: E0710 00:31:16.446838 2481 kuberuntime_sandbox.go:70] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c98486c8f-5xk6f" Jul 10 00:31:16.447313 kubelet[2481]: E0710 00:31:16.446867 2481 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c98486c8f-5xk6f" Jul 10 00:31:16.447313 kubelet[2481]: E0710 00:31:16.446904 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c98486c8f-5xk6f_calico-apiserver(42229c1c-0bab-4152-99c9-e48ab9263892)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c98486c8f-5xk6f_calico-apiserver(42229c1c-0bab-4152-99c9-e48ab9263892)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c98486c8f-5xk6f" podUID="42229c1c-0bab-4152-99c9-e48ab9263892" Jul 10 00:31:16.447313 kubelet[2481]: E0710 00:31:16.446949 2481 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.447393 kubelet[2481]: E0710 00:31:16.446968 2481 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-qztlf" Jul 10 00:31:16.447393 kubelet[2481]: E0710 00:31:16.447018 2481 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-qztlf" Jul 10 00:31:16.447393 kubelet[2481]: E0710 00:31:16.447076 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-qztlf_calico-system(cd45c6d9-27eb-494a-9d3f-a28a02a70496)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-qztlf_calico-system(cd45c6d9-27eb-494a-9d3f-a28a02a70496)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-qztlf" podUID="cd45c6d9-27eb-494a-9d3f-a28a02a70496" Jul 10 00:31:16.448295 kubelet[2481]: E0710 00:31:16.447977 2481 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.448295 kubelet[2481]: E0710 00:31:16.448053 2481 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cdc7dc74d-2zddx" Jul 10 00:31:16.448295 kubelet[2481]: E0710 00:31:16.448073 2481 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cdc7dc74d-2zddx" Jul 10 00:31:16.448447 kubelet[2481]: E0710 00:31:16.448112 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cdc7dc74d-2zddx_calico-system(44306273-2b03-49e5-af4b-bfd726f65b5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cdc7dc74d-2zddx_calico-system(44306273-2b03-49e5-af4b-bfd726f65b5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cdc7dc74d-2zddx" podUID="44306273-2b03-49e5-af4b-bfd726f65b5f" Jul 10 00:31:16.450232 containerd[1430]: time="2025-07-10T00:31:16.450178783Z" level=error msg="Failed to destroy network for sandbox \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.450232 containerd[1430]: time="2025-07-10T00:31:16.450178903Z" level=error msg="Failed to destroy network for sandbox \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.450232 containerd[1430]: 
time="2025-07-10T00:31:16.450555846Z" level=error msg="encountered an error cleaning up failed sandbox \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.450232 containerd[1430]: time="2025-07-10T00:31:16.450577047Z" level=error msg="encountered an error cleaning up failed sandbox \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.450232 containerd[1430]: time="2025-07-10T00:31:16.450600169Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-848d798b6-7lz4s,Uid:55111cac-7833-49df-8d9d-eb348d5bef6c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.451145 kubelet[2481]: E0710 00:31:16.451063 2481 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.452011 kubelet[2481]: E0710 00:31:16.451778 2481 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-848d798b6-7lz4s" Jul 10 00:31:16.452011 kubelet[2481]: E0710 00:31:16.451816 2481 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-848d798b6-7lz4s" Jul 10 00:31:16.452011 kubelet[2481]: E0710 00:31:16.451863 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-848d798b6-7lz4s_calico-system(55111cac-7833-49df-8d9d-eb348d5bef6c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-848d798b6-7lz4s_calico-system(55111cac-7833-49df-8d9d-eb348d5bef6c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-848d798b6-7lz4s" podUID="55111cac-7833-49df-8d9d-eb348d5bef6c" Jul 10 00:31:16.454442 containerd[1430]: time="2025-07-10T00:31:16.450626850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lhmtn,Uid:c30ef12d-7fea-496b-86fe-53d8caa8bd6a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.454589 kubelet[2481]: E0710 00:31:16.454544 2481 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.454634 kubelet[2481]: E0710 00:31:16.454593 2481 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lhmtn" Jul 10 00:31:16.454634 kubelet[2481]: E0710 00:31:16.454614 2481 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lhmtn" Jul 10 00:31:16.454701 kubelet[2481]: E0710 00:31:16.454675 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lhmtn_kube-system(c30ef12d-7fea-496b-86fe-53d8caa8bd6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lhmtn_kube-system(c30ef12d-7fea-496b-86fe-53d8caa8bd6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lhmtn" podUID="c30ef12d-7fea-496b-86fe-53d8caa8bd6a" Jul 10 00:31:16.455534 containerd[1430]: time="2025-07-10T00:31:16.455494908Z" level=error msg="Failed to destroy network for sandbox \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.456047 containerd[1430]: time="2025-07-10T00:31:16.456004659Z" level=error msg="encountered an error cleaning up failed sandbox \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.456131 containerd[1430]: time="2025-07-10T00:31:16.456107226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4wp2k,Uid:c3848646-0b29-4349-9a03-f64c3a70a1ee,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.456334 kubelet[2481]: E0710 00:31:16.456301 2481 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.456389 kubelet[2481]: E0710 00:31:16.456371 2481 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4wp2k" Jul 10 00:31:16.456417 kubelet[2481]: E0710 00:31:16.456394 2481 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4wp2k" Jul 10 00:31:16.456462 kubelet[2481]: E0710 00:31:16.456439 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4wp2k_kube-system(c3848646-0b29-4349-9a03-f64c3a70a1ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4wp2k_kube-system(c3848646-0b29-4349-9a03-f64c3a70a1ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4wp2k" podUID="c3848646-0b29-4349-9a03-f64c3a70a1ee" Jul 10 00:31:16.512537 systemd[1]: Created slice kubepods-besteffort-pod08557a49_6fcf_4236_a001_85a4edaa7064.slice - libcontainer container kubepods-besteffort-pod08557a49_6fcf_4236_a001_85a4edaa7064.slice. 
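Every sandbox failure above, and the identical ones that follow for the remaining pods, reduces to one missing file: the Calico CNI plugin stats /var/lib/calico/nodename, which the calico/node container writes only once it is up, so every sandbox add and delete fails until that container starts. Below is a minimal Go sketch of the gate implied by the error text — an illustrative reconstruction only, not the actual projectcalico cni-plugin source; the function name is hypothetical.

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is the path every log line above reports as missing.
const nodenameFile = "/var/lib/calico/nodename"

// readNodename fails fast with the same guidance the plugin logs when
// calico/node has not yet written the file.
func readNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := readNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}

Once calico/node is running and has mounted /var/lib/calico/, the same check succeeds — which is exactly the transition visible further down in this log, after the calico/node image finishes pulling.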
Jul 10 00:31:16.514984 containerd[1430]: time="2025-07-10T00:31:16.514945506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8l5v7,Uid:08557a49-6fcf-4236-a001-85a4edaa7064,Namespace:calico-system,Attempt:0,}" Jul 10 00:31:16.577128 containerd[1430]: time="2025-07-10T00:31:16.577083509Z" level=error msg="Failed to destroy network for sandbox \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.577727 containerd[1430]: time="2025-07-10T00:31:16.577567578Z" level=error msg="encountered an error cleaning up failed sandbox \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.577727 containerd[1430]: time="2025-07-10T00:31:16.577617261Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8l5v7,Uid:08557a49-6fcf-4236-a001-85a4edaa7064,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.577896 kubelet[2481]: E0710 00:31:16.577846 2481 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.577950 kubelet[2481]: E0710 00:31:16.577906 2481 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8l5v7" Jul 10 00:31:16.577950 kubelet[2481]: E0710 00:31:16.577927 2481 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8l5v7" Jul 10 00:31:16.578035 kubelet[2481]: E0710 00:31:16.577982 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8l5v7_calico-system(08557a49-6fcf-4236-a001-85a4edaa7064)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8l5v7_calico-system(08557a49-6fcf-4236-a001-85a4edaa7064)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8l5v7" podUID="08557a49-6fcf-4236-a001-85a4edaa7064" Jul 10 00:31:16.606703 kubelet[2481]: I0710 00:31:16.606670 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:16.608018 containerd[1430]: time="2025-07-10T00:31:16.607787868Z" level=info msg="StopPodSandbox for \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\"" Jul 10 00:31:16.608018 containerd[1430]: time="2025-07-10T00:31:16.607985320Z" level=info msg="Ensure that sandbox 86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8 in task-service has been cleanup successfully" Jul 10 00:31:16.608769 kubelet[2481]: I0710 00:31:16.608743 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:16.610395 containerd[1430]: time="2025-07-10T00:31:16.610210696Z" level=info msg="StopPodSandbox for \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\"" Jul 10 00:31:16.610477 containerd[1430]: time="2025-07-10T00:31:16.610381306Z" level=info msg="Ensure that sandbox 21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5 in task-service has been cleanup successfully" Jul 10 00:31:16.610827 kubelet[2481]: I0710 00:31:16.610808 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:16.611715 containerd[1430]: time="2025-07-10T00:31:16.611358206Z" level=info msg="StopPodSandbox for \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\"" Jul 10 00:31:16.611715 containerd[1430]: time="2025-07-10T00:31:16.611509615Z" level=info msg="Ensure that sandbox 4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba in task-service has been cleanup successfully" Jul 10 00:31:16.614096 kubelet[2481]: I0710 00:31:16.614065 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:16.615458 containerd[1430]: time="2025-07-10T00:31:16.615421975Z" level=info msg="StopPodSandbox for \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\"" Jul 10 00:31:16.615558 kubelet[2481]: I0710 00:31:16.615530 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:16.615697 containerd[1430]: time="2025-07-10T00:31:16.615645069Z" level=info msg="Ensure that sandbox 44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a in task-service has been cleanup successfully" Jul 10 00:31:16.616430 containerd[1430]: time="2025-07-10T00:31:16.616039813Z" level=info msg="StopPodSandbox for \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\"" Jul 10 00:31:16.616430 containerd[1430]: time="2025-07-10T00:31:16.616242705Z" level=info msg="Ensure that sandbox 259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6 in task-service has been cleanup successfully" Jul 10 00:31:16.621431 kubelet[2481]: I0710 00:31:16.621393 2481 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:16.624827 containerd[1430]: time="2025-07-10T00:31:16.624784868Z" level=info msg="StopPodSandbox for \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\"" Jul 10 00:31:16.626931 containerd[1430]: time="2025-07-10T00:31:16.626888597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 00:31:16.627012 kubelet[2481]: I0710 00:31:16.626896 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:16.628659 containerd[1430]: time="2025-07-10T00:31:16.628298003Z" level=info msg="StopPodSandbox for \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\"" Jul 10 00:31:16.628659 containerd[1430]: time="2025-07-10T00:31:16.628457253Z" level=info msg="Ensure that sandbox 81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b in task-service has been cleanup successfully" Jul 10 00:31:16.630099 containerd[1430]: time="2025-07-10T00:31:16.630069871Z" level=info msg="Ensure that sandbox 92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9 in task-service has been cleanup successfully" Jul 10 00:31:16.635271 kubelet[2481]: I0710 00:31:16.635233 2481 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:16.635917 containerd[1430]: time="2025-07-10T00:31:16.635788021Z" level=info msg="StopPodSandbox for \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\"" Jul 10 00:31:16.635973 containerd[1430]: time="2025-07-10T00:31:16.635951831Z" level=info msg="Ensure that sandbox 5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0 in task-service has been cleanup successfully" Jul 10 00:31:16.663985 containerd[1430]: time="2025-07-10T00:31:16.663919303Z" level=error msg="StopPodSandbox for \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\" failed" error="failed to destroy network for sandbox \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.664213 kubelet[2481]: E0710 00:31:16.664175 2481 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:16.668691 kubelet[2481]: E0710 00:31:16.668533 2481 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8"} Jul 10 00:31:16.668691 kubelet[2481]: E0710 00:31:16.668619 2481 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"44306273-2b03-49e5-af4b-bfd726f65b5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:31:16.668691 kubelet[2481]: E0710 00:31:16.668643 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"44306273-2b03-49e5-af4b-bfd726f65b5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cdc7dc74d-2zddx" podUID="44306273-2b03-49e5-af4b-bfd726f65b5f" Jul 10 00:31:16.670271 containerd[1430]: time="2025-07-10T00:31:16.670230729Z" level=error msg="StopPodSandbox for \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\" failed" error="failed to destroy network for sandbox \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.670821 kubelet[2481]: E0710 00:31:16.670782 2481 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:16.670900 kubelet[2481]: E0710 00:31:16.670834 2481 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba"} Jul 10 00:31:16.670900 kubelet[2481]: E0710 00:31:16.670860 2481 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3848646-0b29-4349-9a03-f64c3a70a1ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:31:16.670900 kubelet[2481]: E0710 00:31:16.670879 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3848646-0b29-4349-9a03-f64c3a70a1ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4wp2k" podUID="c3848646-0b29-4349-9a03-f64c3a70a1ee" Jul 10 00:31:16.681226 containerd[1430]: time="2025-07-10T00:31:16.681184399Z" level=error msg="StopPodSandbox for \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\" 
failed" error="failed to destroy network for sandbox \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.681554 kubelet[2481]: E0710 00:31:16.681501 2481 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:16.681554 kubelet[2481]: E0710 00:31:16.681544 2481 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9"} Jul 10 00:31:16.681755 kubelet[2481]: E0710 00:31:16.681574 2481 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd45c6d9-27eb-494a-9d3f-a28a02a70496\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:31:16.681755 kubelet[2481]: E0710 00:31:16.681600 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd45c6d9-27eb-494a-9d3f-a28a02a70496\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-qztlf" podUID="cd45c6d9-27eb-494a-9d3f-a28a02a70496" Jul 10 00:31:16.681866 containerd[1430]: time="2025-07-10T00:31:16.681711711Z" level=error msg="StopPodSandbox for \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\" failed" error="failed to destroy network for sandbox \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.681921 kubelet[2481]: E0710 00:31:16.681855 2481 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:16.681954 kubelet[2481]: E0710 00:31:16.681888 2481 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a"} Jul 10 00:31:16.682082 kubelet[2481]: E0710 
00:31:16.681956 2481 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55111cac-7833-49df-8d9d-eb348d5bef6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:31:16.682082 kubelet[2481]: E0710 00:31:16.681975 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55111cac-7833-49df-8d9d-eb348d5bef6c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-848d798b6-7lz4s" podUID="55111cac-7833-49df-8d9d-eb348d5bef6c" Jul 10 00:31:16.687212 containerd[1430]: time="2025-07-10T00:31:16.687167365Z" level=error msg="StopPodSandbox for \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\" failed" error="failed to destroy network for sandbox \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.687394 kubelet[2481]: E0710 00:31:16.687358 2481 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:16.687537 kubelet[2481]: E0710 00:31:16.687403 2481 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5"} Jul 10 00:31:16.687537 kubelet[2481]: E0710 00:31:16.687443 2481 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fde35633-5bd3-4224-9472-f70c96f585a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:31:16.687537 kubelet[2481]: E0710 00:31:16.687462 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fde35633-5bd3-4224-9472-f70c96f585a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-c98486c8f-n2v22" podUID="fde35633-5bd3-4224-9472-f70c96f585a5" Jul 10 00:31:16.694582 containerd[1430]: time="2025-07-10T00:31:16.694415169Z" level=error msg="StopPodSandbox for \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\" failed" error="failed to destroy network for sandbox \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.695175 kubelet[2481]: E0710 00:31:16.695131 2481 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:16.695249 kubelet[2481]: E0710 00:31:16.695189 2481 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b"} Jul 10 00:31:16.695280 kubelet[2481]: E0710 00:31:16.695246 2481 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08557a49-6fcf-4236-a001-85a4edaa7064\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:31:16.695280 kubelet[2481]: E0710 00:31:16.695271 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"08557a49-6fcf-4236-a001-85a4edaa7064\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8l5v7" podUID="08557a49-6fcf-4236-a001-85a4edaa7064" Jul 10 00:31:16.697160 containerd[1430]: time="2025-07-10T00:31:16.696995447Z" level=error msg="StopPodSandbox for \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\" failed" error="failed to destroy network for sandbox \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.697277 kubelet[2481]: E0710 00:31:16.697234 2481 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:16.697277 kubelet[2481]: E0710 00:31:16.697270 2481 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6"} Jul 10 00:31:16.697369 kubelet[2481]: E0710 00:31:16.697321 2481 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42229c1c-0bab-4152-99c9-e48ab9263892\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:31:16.697369 kubelet[2481]: E0710 00:31:16.697342 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42229c1c-0bab-4152-99c9-e48ab9263892\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c98486c8f-5xk6f" podUID="42229c1c-0bab-4152-99c9-e48ab9263892" Jul 10 00:31:16.699655 containerd[1430]: time="2025-07-10T00:31:16.699603486Z" level=error msg="StopPodSandbox for \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\" failed" error="failed to destroy network for sandbox \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:31:16.700191 kubelet[2481]: E0710 00:31:16.700158 2481 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:16.700259 kubelet[2481]: E0710 00:31:16.700200 2481 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0"} Jul 10 00:31:16.700259 kubelet[2481]: E0710 00:31:16.700226 2481 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c30ef12d-7fea-496b-86fe-53d8caa8bd6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:31:16.700259 kubelet[2481]: E0710 00:31:16.700245 2481 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c30ef12d-7fea-496b-86fe-53d8caa8bd6a\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lhmtn" podUID="c30ef12d-7fea-496b-86fe-53d8caa8bd6a" Jul 10 00:31:20.015776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3598707128.mount: Deactivated successfully. Jul 10 00:31:20.301799 containerd[1430]: time="2025-07-10T00:31:20.301674579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 10 00:31:20.311236 containerd[1430]: time="2025-07-10T00:31:20.310390038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.68346072s" Jul 10 00:31:20.311236 containerd[1430]: time="2025-07-10T00:31:20.310436281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 10 00:31:20.321776 containerd[1430]: time="2025-07-10T00:31:20.321725835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:20.322659 containerd[1430]: time="2025-07-10T00:31:20.322628202Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:20.323084 containerd[1430]: time="2025-07-10T00:31:20.322695286Z" level=info msg="CreateContainer within sandbox \"54acd005fc73f85f48ab0608ea2e595d7f734d1dcd0f15b500f388e5cd8fcd10\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 00:31:20.323496 containerd[1430]: time="2025-07-10T00:31:20.323456486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:20.352412 containerd[1430]: time="2025-07-10T00:31:20.352356047Z" level=info msg="CreateContainer within sandbox \"54acd005fc73f85f48ab0608ea2e595d7f734d1dcd0f15b500f388e5cd8fcd10\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"138c876e275b3ff9f7188e1b9be70d2d9a1c12b6920245e9e719200a8a03aaaa\"" Jul 10 00:31:20.352969 containerd[1430]: time="2025-07-10T00:31:20.352942278Z" level=info msg="StartContainer for \"138c876e275b3ff9f7188e1b9be70d2d9a1c12b6920245e9e719200a8a03aaaa\"" Jul 10 00:31:20.412241 systemd[1]: Started cri-containerd-138c876e275b3ff9f7188e1b9be70d2d9a1c12b6920245e9e719200a8a03aaaa.scope - libcontainer container 138c876e275b3ff9f7188e1b9be70d2d9a1c12b6920245e9e719200a8a03aaaa. 
Jul 10 00:31:20.453144 containerd[1430]: time="2025-07-10T00:31:20.449801337Z" level=info msg="StartContainer for \"138c876e275b3ff9f7188e1b9be70d2d9a1c12b6920245e9e719200a8a03aaaa\" returns successfully" Jul 10 00:31:20.669054 kubelet[2481]: I0710 00:31:20.668980 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nnd7m" podStartSLOduration=1.713045459 podStartE2EDuration="12.668963073s" podCreationTimestamp="2025-07-10 00:31:08 +0000 UTC" firstStartedPulling="2025-07-10 00:31:09.355298668 +0000 UTC m=+19.950379830" lastFinishedPulling="2025-07-10 00:31:20.311216322 +0000 UTC m=+30.906297444" observedRunningTime="2025-07-10 00:31:20.668029304 +0000 UTC m=+31.263110466" watchObservedRunningTime="2025-07-10 00:31:20.668963073 +0000 UTC m=+31.264044235" Jul 10 00:31:20.727440 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 00:31:20.727566 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 10 00:31:20.852952 containerd[1430]: time="2025-07-10T00:31:20.852888195Z" level=info msg="StopPodSandbox for \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\"" Jul 10 00:31:21.161214 containerd[1430]: 2025-07-10 00:31:21.022 [INFO][3765] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:21.161214 containerd[1430]: 2025-07-10 00:31:21.023 [INFO][3765] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" iface="eth0" netns="/var/run/netns/cni-9ac659f0-4d7b-08e2-8f8a-af8786c95433" Jul 10 00:31:21.161214 containerd[1430]: 2025-07-10 00:31:21.023 [INFO][3765] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" iface="eth0" netns="/var/run/netns/cni-9ac659f0-4d7b-08e2-8f8a-af8786c95433" Jul 10 00:31:21.161214 containerd[1430]: 2025-07-10 00:31:21.024 [INFO][3765] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" iface="eth0" netns="/var/run/netns/cni-9ac659f0-4d7b-08e2-8f8a-af8786c95433" Jul 10 00:31:21.161214 containerd[1430]: 2025-07-10 00:31:21.024 [INFO][3765] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:21.161214 containerd[1430]: 2025-07-10 00:31:21.024 [INFO][3765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:21.161214 containerd[1430]: 2025-07-10 00:31:21.146 [INFO][3782] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" HandleID="k8s-pod-network.44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Workload="localhost-k8s-whisker--848d798b6--7lz4s-eth0" Jul 10 00:31:21.161214 containerd[1430]: 2025-07-10 00:31:21.146 [INFO][3782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:21.161214 containerd[1430]: 2025-07-10 00:31:21.146 [INFO][3782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:21.161214 containerd[1430]: 2025-07-10 00:31:21.156 [WARNING][3782] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" HandleID="k8s-pod-network.44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Workload="localhost-k8s-whisker--848d798b6--7lz4s-eth0" Jul 10 00:31:21.161214 containerd[1430]: 2025-07-10 00:31:21.156 [INFO][3782] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" HandleID="k8s-pod-network.44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Workload="localhost-k8s-whisker--848d798b6--7lz4s-eth0" Jul 10 00:31:21.161214 containerd[1430]: 2025-07-10 00:31:21.157 [INFO][3782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:21.161214 containerd[1430]: 2025-07-10 00:31:21.159 [INFO][3765] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:21.161608 containerd[1430]: time="2025-07-10T00:31:21.161444986Z" level=info msg="TearDown network for sandbox \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\" successfully" Jul 10 00:31:21.161608 containerd[1430]: time="2025-07-10T00:31:21.161474308Z" level=info msg="StopPodSandbox for \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\" returns successfully" Jul 10 00:31:21.163290 systemd[1]: run-netns-cni\x2d9ac659f0\x2d4d7b\x2d08e2\x2d8f8a\x2daf8786c95433.mount: Deactivated successfully. Jul 10 00:31:21.230027 kubelet[2481]: I0710 00:31:21.229971 2481 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/55111cac-7833-49df-8d9d-eb348d5bef6c-whisker-backend-key-pair\") pod \"55111cac-7833-49df-8d9d-eb348d5bef6c\" (UID: \"55111cac-7833-49df-8d9d-eb348d5bef6c\") " Jul 10 00:31:21.230027 kubelet[2481]: I0710 00:31:21.230023 2481 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw8km\" (UniqueName: \"kubernetes.io/projected/55111cac-7833-49df-8d9d-eb348d5bef6c-kube-api-access-pw8km\") pod \"55111cac-7833-49df-8d9d-eb348d5bef6c\" (UID: \"55111cac-7833-49df-8d9d-eb348d5bef6c\") " Jul 10 00:31:21.230213 kubelet[2481]: I0710 00:31:21.230066 2481 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55111cac-7833-49df-8d9d-eb348d5bef6c-whisker-ca-bundle\") pod \"55111cac-7833-49df-8d9d-eb348d5bef6c\" (UID: \"55111cac-7833-49df-8d9d-eb348d5bef6c\") " Jul 10 00:31:21.235331 kubelet[2481]: I0710 00:31:21.235087 2481 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55111cac-7833-49df-8d9d-eb348d5bef6c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "55111cac-7833-49df-8d9d-eb348d5bef6c" (UID: "55111cac-7833-49df-8d9d-eb348d5bef6c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 10 00:31:21.236547 kubelet[2481]: I0710 00:31:21.236504 2481 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55111cac-7833-49df-8d9d-eb348d5bef6c-kube-api-access-pw8km" (OuterVolumeSpecName: "kube-api-access-pw8km") pod "55111cac-7833-49df-8d9d-eb348d5bef6c" (UID: "55111cac-7833-49df-8d9d-eb348d5bef6c"). InnerVolumeSpecName "kube-api-access-pw8km". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 10 00:31:21.237038 systemd[1]: var-lib-kubelet-pods-55111cac\x2d7833\x2d49df\x2d8d9d\x2deb348d5bef6c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpw8km.mount: Deactivated successfully. Jul 10 00:31:21.237146 systemd[1]: var-lib-kubelet-pods-55111cac\x2d7833\x2d49df\x2d8d9d\x2deb348d5bef6c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 00:31:21.238165 kubelet[2481]: I0710 00:31:21.238137 2481 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55111cac-7833-49df-8d9d-eb348d5bef6c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "55111cac-7833-49df-8d9d-eb348d5bef6c" (UID: "55111cac-7833-49df-8d9d-eb348d5bef6c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 10 00:31:21.331157 kubelet[2481]: I0710 00:31:21.331112 2481 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55111cac-7833-49df-8d9d-eb348d5bef6c-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:21.331157 kubelet[2481]: I0710 00:31:21.331146 2481 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/55111cac-7833-49df-8d9d-eb348d5bef6c-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:21.331157 kubelet[2481]: I0710 00:31:21.331156 2481 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pw8km\" (UniqueName: \"kubernetes.io/projected/55111cac-7833-49df-8d9d-eb348d5bef6c-kube-api-access-pw8km\") on node \"localhost\" DevicePath \"\"" Jul 10 00:31:21.518999 systemd[1]: Removed slice kubepods-besteffort-pod55111cac_7833_49df_8d9d_eb348d5bef6c.slice - libcontainer container kubepods-besteffort-pod55111cac_7833_49df_8d9d_eb348d5bef6c.slice. Jul 10 00:31:21.653608 kubelet[2481]: I0710 00:31:21.653576 2481 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:31:21.703059 systemd[1]: Created slice kubepods-besteffort-pod4b5769fa_dfaf_49e4_8eb3_9e9f9d5fbfc0.slice - libcontainer container kubepods-besteffort-pod4b5769fa_dfaf_49e4_8eb3_9e9f9d5fbfc0.slice. 
Jul 10 00:31:21.734152 kubelet[2481]: I0710 00:31:21.734111 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xgfx\" (UniqueName: \"kubernetes.io/projected/4b5769fa-dfaf-49e4-8eb3-9e9f9d5fbfc0-kube-api-access-6xgfx\") pod \"whisker-859b87475d-px9qr\" (UID: \"4b5769fa-dfaf-49e4-8eb3-9e9f9d5fbfc0\") " pod="calico-system/whisker-859b87475d-px9qr" Jul 10 00:31:21.734525 kubelet[2481]: I0710 00:31:21.734181 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4b5769fa-dfaf-49e4-8eb3-9e9f9d5fbfc0-whisker-backend-key-pair\") pod \"whisker-859b87475d-px9qr\" (UID: \"4b5769fa-dfaf-49e4-8eb3-9e9f9d5fbfc0\") " pod="calico-system/whisker-859b87475d-px9qr" Jul 10 00:31:21.734525 kubelet[2481]: I0710 00:31:21.734198 2481 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b5769fa-dfaf-49e4-8eb3-9e9f9d5fbfc0-whisker-ca-bundle\") pod \"whisker-859b87475d-px9qr\" (UID: \"4b5769fa-dfaf-49e4-8eb3-9e9f9d5fbfc0\") " pod="calico-system/whisker-859b87475d-px9qr" Jul 10 00:31:22.006820 containerd[1430]: time="2025-07-10T00:31:22.006772100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-859b87475d-px9qr,Uid:4b5769fa-dfaf-49e4-8eb3-9e9f9d5fbfc0,Namespace:calico-system,Attempt:0,}" Jul 10 00:31:22.187299 systemd-networkd[1371]: cali1add8250de3: Link UP Jul 10 00:31:22.187880 systemd-networkd[1371]: cali1add8250de3: Gained carrier Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.041 [INFO][3804] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.054 [INFO][3804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--859b87475d--px9qr-eth0 whisker-859b87475d- calico-system 4b5769fa-dfaf-49e4-8eb3-9e9f9d5fbfc0 900 0 2025-07-10 00:31:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:859b87475d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-859b87475d-px9qr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1add8250de3 [] [] }} ContainerID="ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" Namespace="calico-system" Pod="whisker-859b87475d-px9qr" WorkloadEndpoint="localhost-k8s-whisker--859b87475d--px9qr-" Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.054 [INFO][3804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" Namespace="calico-system" Pod="whisker-859b87475d-px9qr" WorkloadEndpoint="localhost-k8s-whisker--859b87475d--px9qr-eth0" Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.078 [INFO][3819] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" HandleID="k8s-pod-network.ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" Workload="localhost-k8s-whisker--859b87475d--px9qr-eth0" Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.078 [INFO][3819] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" HandleID="k8s-pod-network.ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" Workload="localhost-k8s-whisker--859b87475d--px9qr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b40e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-859b87475d-px9qr", "timestamp":"2025-07-10 00:31:22.077967557 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.078 [INFO][3819] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.078 [INFO][3819] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.078 [INFO][3819] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.089 [INFO][3819] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" host="localhost" Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.098 [INFO][3819] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.105 [INFO][3819] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.116 [INFO][3819] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.120 [INFO][3819] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.120 [INFO][3819] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" host="localhost" Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.129 [INFO][3819] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3 Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.133 [INFO][3819] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" host="localhost" Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.144 [INFO][3819] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" host="localhost" Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.144 [INFO][3819] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" host="localhost" Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.144 [INFO][3819] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:31:22.203509 containerd[1430]: 2025-07-10 00:31:22.144 [INFO][3819] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" HandleID="k8s-pod-network.ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" Workload="localhost-k8s-whisker--859b87475d--px9qr-eth0" Jul 10 00:31:22.205403 containerd[1430]: 2025-07-10 00:31:22.147 [INFO][3804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" Namespace="calico-system" Pod="whisker-859b87475d-px9qr" WorkloadEndpoint="localhost-k8s-whisker--859b87475d--px9qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--859b87475d--px9qr-eth0", GenerateName:"whisker-859b87475d-", Namespace:"calico-system", SelfLink:"", UID:"4b5769fa-dfaf-49e4-8eb3-9e9f9d5fbfc0", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"859b87475d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-859b87475d-px9qr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1add8250de3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:22.205403 containerd[1430]: 2025-07-10 00:31:22.147 [INFO][3804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" Namespace="calico-system" Pod="whisker-859b87475d-px9qr" WorkloadEndpoint="localhost-k8s-whisker--859b87475d--px9qr-eth0" Jul 10 00:31:22.205403 containerd[1430]: 2025-07-10 00:31:22.147 [INFO][3804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1add8250de3 ContainerID="ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" Namespace="calico-system" Pod="whisker-859b87475d-px9qr" WorkloadEndpoint="localhost-k8s-whisker--859b87475d--px9qr-eth0" Jul 10 00:31:22.205403 containerd[1430]: 2025-07-10 00:31:22.185 [INFO][3804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" Namespace="calico-system" Pod="whisker-859b87475d-px9qr" WorkloadEndpoint="localhost-k8s-whisker--859b87475d--px9qr-eth0" Jul 10 00:31:22.205403 containerd[1430]: 2025-07-10 00:31:22.185 [INFO][3804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" Namespace="calico-system" Pod="whisker-859b87475d-px9qr" WorkloadEndpoint="localhost-k8s-whisker--859b87475d--px9qr-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--859b87475d--px9qr-eth0", GenerateName:"whisker-859b87475d-", Namespace:"calico-system", SelfLink:"", UID:"4b5769fa-dfaf-49e4-8eb3-9e9f9d5fbfc0", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"859b87475d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3", Pod:"whisker-859b87475d-px9qr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1add8250de3", MAC:"5e:da:e6:89:5c:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:22.205403 containerd[1430]: 2025-07-10 00:31:22.198 [INFO][3804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3" Namespace="calico-system" Pod="whisker-859b87475d-px9qr" WorkloadEndpoint="localhost-k8s-whisker--859b87475d--px9qr-eth0" Jul 10 00:31:22.230547 containerd[1430]: time="2025-07-10T00:31:22.229853779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:31:22.230824 containerd[1430]: time="2025-07-10T00:31:22.230698900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:31:22.230824 containerd[1430]: time="2025-07-10T00:31:22.230762823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:22.231102 containerd[1430]: time="2025-07-10T00:31:22.231010635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:22.256293 systemd[1]: Started cri-containerd-ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3.scope - libcontainer container ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3. 
Jul 10 00:31:22.271023 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:31:22.319186 containerd[1430]: time="2025-07-10T00:31:22.319143165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-859b87475d-px9qr,Uid:4b5769fa-dfaf-49e4-8eb3-9e9f9d5fbfc0,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3\"" Jul 10 00:31:22.321423 containerd[1430]: time="2025-07-10T00:31:22.320792606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 00:31:23.297448 containerd[1430]: time="2025-07-10T00:31:23.296497224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:23.297448 containerd[1430]: time="2025-07-10T00:31:23.296982327Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 10 00:31:23.298406 containerd[1430]: time="2025-07-10T00:31:23.298364952Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:23.300160 containerd[1430]: time="2025-07-10T00:31:23.300113796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:23.301549 containerd[1430]: time="2025-07-10T00:31:23.301522423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 980.142428ms" Jul 10 00:31:23.301842 containerd[1430]: time="2025-07-10T00:31:23.301553024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 10 00:31:23.308436 containerd[1430]: time="2025-07-10T00:31:23.308275624Z" level=info msg="CreateContainer within sandbox \"ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 00:31:23.319143 containerd[1430]: time="2025-07-10T00:31:23.319096458Z" level=info msg="CreateContainer within sandbox \"ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1392ffed52477fd4a1896bd954a5c5870a7c0dc086902d673fc2c80412416f62\"" Jul 10 00:31:23.321087 containerd[1430]: time="2025-07-10T00:31:23.319822532Z" level=info msg="StartContainer for \"1392ffed52477fd4a1896bd954a5c5870a7c0dc086902d673fc2c80412416f62\"" Jul 10 00:31:23.356420 systemd[1]: Started cri-containerd-1392ffed52477fd4a1896bd954a5c5870a7c0dc086902d673fc2c80412416f62.scope - libcontainer container 1392ffed52477fd4a1896bd954a5c5870a7c0dc086902d673fc2c80412416f62. 
Jul 10 00:31:23.374384 systemd-networkd[1371]: cali1add8250de3: Gained IPv6LL Jul 10 00:31:23.421369 containerd[1430]: time="2025-07-10T00:31:23.421323876Z" level=info msg="StartContainer for \"1392ffed52477fd4a1896bd954a5c5870a7c0dc086902d673fc2c80412416f62\" returns successfully" Jul 10 00:31:23.424405 containerd[1430]: time="2025-07-10T00:31:23.424375581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 00:31:23.509688 kubelet[2481]: I0710 00:31:23.509643 2481 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55111cac-7833-49df-8d9d-eb348d5bef6c" path="/var/lib/kubelet/pods/55111cac-7833-49df-8d9d-eb348d5bef6c/volumes" Jul 10 00:31:24.845578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1134922824.mount: Deactivated successfully. Jul 10 00:31:24.869152 containerd[1430]: time="2025-07-10T00:31:24.869099267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:24.871495 containerd[1430]: time="2025-07-10T00:31:24.871457016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 10 00:31:24.874319 containerd[1430]: time="2025-07-10T00:31:24.874256304Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:24.876764 containerd[1430]: time="2025-07-10T00:31:24.876498848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:24.877578 containerd[1430]: time="2025-07-10T00:31:24.877504734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.453097351s" Jul 10 00:31:24.877578 containerd[1430]: time="2025-07-10T00:31:24.877543216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 10 00:31:24.882517 containerd[1430]: time="2025-07-10T00:31:24.882441601Z" level=info msg="CreateContainer within sandbox \"ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 00:31:24.894747 containerd[1430]: time="2025-07-10T00:31:24.894683685Z" level=info msg="CreateContainer within sandbox \"ffc749cd62985c38e012463420c5dbbe1547553d94522881c95ad36882ca6ca3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8f5c9d12157162a322df78b809bea5cd6fc62dae706607f7c81b76f18a8f8a26\"" Jul 10 00:31:24.895753 containerd[1430]: time="2025-07-10T00:31:24.895628128Z" level=info msg="StartContainer for \"8f5c9d12157162a322df78b809bea5cd6fc62dae706607f7c81b76f18a8f8a26\"" Jul 10 00:31:24.939231 systemd[1]: Started cri-containerd-8f5c9d12157162a322df78b809bea5cd6fc62dae706607f7c81b76f18a8f8a26.scope - libcontainer container 8f5c9d12157162a322df78b809bea5cd6fc62dae706607f7c81b76f18a8f8a26. 
Jul 10 00:31:24.971735 containerd[1430]: time="2025-07-10T00:31:24.971693950Z" level=info msg="StartContainer for \"8f5c9d12157162a322df78b809bea5cd6fc62dae706607f7c81b76f18a8f8a26\" returns successfully" Jul 10 00:31:25.546861 kubelet[2481]: I0710 00:31:25.546826 2481 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:31:25.549582 kubelet[2481]: E0710 00:31:25.549450 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:25.669177 kubelet[2481]: E0710 00:31:25.669146 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:26.611076 kernel: bpftool[4198]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 10 00:31:26.773919 systemd-networkd[1371]: vxlan.calico: Link UP Jul 10 00:31:26.773930 systemd-networkd[1371]: vxlan.calico: Gained carrier Jul 10 00:31:27.508221 containerd[1430]: time="2025-07-10T00:31:27.508129944Z" level=info msg="StopPodSandbox for \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\"" Jul 10 00:31:27.558235 kubelet[2481]: I0710 00:31:27.558160 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-859b87475d-px9qr" podStartSLOduration=3.999656959 podStartE2EDuration="6.557621786s" podCreationTimestamp="2025-07-10 00:31:21 +0000 UTC" firstStartedPulling="2025-07-10 00:31:22.320517352 +0000 UTC m=+32.915598474" lastFinishedPulling="2025-07-10 00:31:24.878482139 +0000 UTC m=+35.473563301" observedRunningTime="2025-07-10 00:31:25.679471579 +0000 UTC m=+36.274552741" watchObservedRunningTime="2025-07-10 00:31:27.557621786 +0000 UTC m=+38.152702908" Jul 10 00:31:27.597132 containerd[1430]: 2025-07-10 00:31:27.557 [INFO][4284] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:27.597132 containerd[1430]: 2025-07-10 00:31:27.559 [INFO][4284] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" iface="eth0" netns="/var/run/netns/cni-b81d90e8-252a-2bd3-5b44-3cd0bd24d03b" Jul 10 00:31:27.597132 containerd[1430]: 2025-07-10 00:31:27.559 [INFO][4284] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" iface="eth0" netns="/var/run/netns/cni-b81d90e8-252a-2bd3-5b44-3cd0bd24d03b" Jul 10 00:31:27.597132 containerd[1430]: 2025-07-10 00:31:27.559 [INFO][4284] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" iface="eth0" netns="/var/run/netns/cni-b81d90e8-252a-2bd3-5b44-3cd0bd24d03b" Jul 10 00:31:27.597132 containerd[1430]: 2025-07-10 00:31:27.559 [INFO][4284] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:27.597132 containerd[1430]: 2025-07-10 00:31:27.559 [INFO][4284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:27.597132 containerd[1430]: 2025-07-10 00:31:27.581 [INFO][4292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" HandleID="k8s-pod-network.81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Workload="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:27.597132 containerd[1430]: 2025-07-10 00:31:27.581 [INFO][4292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:27.597132 containerd[1430]: 2025-07-10 00:31:27.581 [INFO][4292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:27.597132 containerd[1430]: 2025-07-10 00:31:27.591 [WARNING][4292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" HandleID="k8s-pod-network.81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Workload="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:27.597132 containerd[1430]: 2025-07-10 00:31:27.591 [INFO][4292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" HandleID="k8s-pod-network.81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Workload="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:27.597132 containerd[1430]: 2025-07-10 00:31:27.592 [INFO][4292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:27.597132 containerd[1430]: 2025-07-10 00:31:27.594 [INFO][4284] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:27.597629 containerd[1430]: time="2025-07-10T00:31:27.597361819Z" level=info msg="TearDown network for sandbox \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\" successfully" Jul 10 00:31:27.597629 containerd[1430]: time="2025-07-10T00:31:27.597391100Z" level=info msg="StopPodSandbox for \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\" returns successfully" Jul 10 00:31:27.599389 systemd[1]: run-netns-cni\x2db81d90e8\x2d252a\x2d2bd3\x2d5b44\x2d3cd0bd24d03b.mount: Deactivated successfully. 
Jul 10 00:31:27.600903 containerd[1430]: time="2025-07-10T00:31:27.600743801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8l5v7,Uid:08557a49-6fcf-4236-a001-85a4edaa7064,Namespace:calico-system,Attempt:1,}" Jul 10 00:31:27.723788 systemd-networkd[1371]: calid31ece3a92d: Link UP Jul 10 00:31:27.724018 systemd-networkd[1371]: calid31ece3a92d: Gained carrier Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.650 [INFO][4302] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8l5v7-eth0 csi-node-driver- calico-system 08557a49-6fcf-4236-a001-85a4edaa7064 940 0 2025-07-10 00:31:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8l5v7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid31ece3a92d [] [] }} ContainerID="6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" Namespace="calico-system" Pod="csi-node-driver-8l5v7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8l5v7-" Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.650 [INFO][4302] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" Namespace="calico-system" Pod="csi-node-driver-8l5v7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.674 [INFO][4314] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" HandleID="k8s-pod-network.6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" Workload="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.674 [INFO][4314] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" HandleID="k8s-pod-network.6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" Workload="localhost-k8s-csi--node--driver--8l5v7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8l5v7", "timestamp":"2025-07-10 00:31:27.674583428 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.674 [INFO][4314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.674 [INFO][4314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.674 [INFO][4314] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.685 [INFO][4314] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" host="localhost" Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.691 [INFO][4314] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.697 [INFO][4314] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.699 [INFO][4314] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.701 [INFO][4314] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.702 [INFO][4314] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" host="localhost" Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.703 [INFO][4314] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.710 [INFO][4314] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" host="localhost" Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.717 [INFO][4314] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" host="localhost" Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.717 [INFO][4314] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" host="localhost" Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.717 [INFO][4314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:31:27.746074 containerd[1430]: 2025-07-10 00:31:27.717 [INFO][4314] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" HandleID="k8s-pod-network.6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" Workload="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:27.746846 containerd[1430]: 2025-07-10 00:31:27.719 [INFO][4302] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" Namespace="calico-system" Pod="csi-node-driver-8l5v7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8l5v7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8l5v7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08557a49-6fcf-4236-a001-85a4edaa7064", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8l5v7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid31ece3a92d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:27.746846 containerd[1430]: 2025-07-10 00:31:27.720 [INFO][4302] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" Namespace="calico-system" Pod="csi-node-driver-8l5v7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:27.746846 containerd[1430]: 2025-07-10 00:31:27.720 [INFO][4302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid31ece3a92d ContainerID="6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" Namespace="calico-system" Pod="csi-node-driver-8l5v7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:27.746846 containerd[1430]: 2025-07-10 00:31:27.723 [INFO][4302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" Namespace="calico-system" Pod="csi-node-driver-8l5v7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:27.746846 containerd[1430]: 2025-07-10 00:31:27.724 [INFO][4302] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" Namespace="calico-system" Pod="csi-node-driver-8l5v7" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8l5v7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8l5v7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08557a49-6fcf-4236-a001-85a4edaa7064", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f", Pod:"csi-node-driver-8l5v7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid31ece3a92d", MAC:"36:a6:f1:9f:6d:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:27.746846 containerd[1430]: 2025-07-10 00:31:27.742 [INFO][4302] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f" Namespace="calico-system" Pod="csi-node-driver-8l5v7" WorkloadEndpoint="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:27.766197 containerd[1430]: time="2025-07-10T00:31:27.765943233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:31:27.766197 containerd[1430]: time="2025-07-10T00:31:27.766010556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:31:27.766197 containerd[1430]: time="2025-07-10T00:31:27.766033637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:27.766527 containerd[1430]: time="2025-07-10T00:31:27.766165522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:27.803259 systemd[1]: Started cri-containerd-6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f.scope - libcontainer container 6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f. 
Jul 10 00:31:27.815243 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:31:27.835005 containerd[1430]: time="2025-07-10T00:31:27.834963658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8l5v7,Uid:08557a49-6fcf-4236-a001-85a4edaa7064,Namespace:calico-system,Attempt:1,} returns sandbox id \"6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f\"" Jul 10 00:31:27.837337 containerd[1430]: time="2025-07-10T00:31:27.837289635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 00:31:28.045297 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Jul 10 00:31:28.508307 containerd[1430]: time="2025-07-10T00:31:28.508269686Z" level=info msg="StopPodSandbox for \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\"" Jul 10 00:31:28.508657 containerd[1430]: time="2025-07-10T00:31:28.508545657Z" level=info msg="StopPodSandbox for \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\"" Jul 10 00:31:28.509577 containerd[1430]: time="2025-07-10T00:31:28.509304168Z" level=info msg="StopPodSandbox for \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\"" Jul 10 00:31:28.659827 containerd[1430]: 2025-07-10 00:31:28.596 [INFO][4406] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:28.659827 containerd[1430]: 2025-07-10 00:31:28.597 [INFO][4406] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" iface="eth0" netns="/var/run/netns/cni-c96ac176-1baf-b261-61f1-e6e38d5dba34" Jul 10 00:31:28.659827 containerd[1430]: 2025-07-10 00:31:28.597 [INFO][4406] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" iface="eth0" netns="/var/run/netns/cni-c96ac176-1baf-b261-61f1-e6e38d5dba34" Jul 10 00:31:28.659827 containerd[1430]: 2025-07-10 00:31:28.597 [INFO][4406] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" iface="eth0" netns="/var/run/netns/cni-c96ac176-1baf-b261-61f1-e6e38d5dba34" Jul 10 00:31:28.659827 containerd[1430]: 2025-07-10 00:31:28.597 [INFO][4406] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:28.659827 containerd[1430]: 2025-07-10 00:31:28.597 [INFO][4406] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:28.659827 containerd[1430]: 2025-07-10 00:31:28.623 [INFO][4429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" HandleID="k8s-pod-network.5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Workload="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:28.659827 containerd[1430]: 2025-07-10 00:31:28.623 [INFO][4429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:28.659827 containerd[1430]: 2025-07-10 00:31:28.623 [INFO][4429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:31:28.659827 containerd[1430]: 2025-07-10 00:31:28.646 [WARNING][4429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" HandleID="k8s-pod-network.5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Workload="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:28.659827 containerd[1430]: 2025-07-10 00:31:28.646 [INFO][4429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" HandleID="k8s-pod-network.5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Workload="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:28.659827 containerd[1430]: 2025-07-10 00:31:28.652 [INFO][4429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:28.659827 containerd[1430]: 2025-07-10 00:31:28.657 [INFO][4406] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:28.663078 containerd[1430]: time="2025-07-10T00:31:28.661487597Z" level=info msg="TearDown network for sandbox \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\" successfully" Jul 10 00:31:28.663078 containerd[1430]: time="2025-07-10T00:31:28.661531038Z" level=info msg="StopPodSandbox for \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\" returns successfully" Jul 10 00:31:28.663078 containerd[1430]: time="2025-07-10T00:31:28.662504118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lhmtn,Uid:c30ef12d-7fea-496b-86fe-53d8caa8bd6a,Namespace:kube-system,Attempt:1,}" Jul 10 00:31:28.663197 kubelet[2481]: E0710 00:31:28.661960 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:28.663468 systemd[1]: run-netns-cni\x2dc96ac176\x2d1baf\x2db261\x2d61f1\x2de6e38d5dba34.mount: Deactivated successfully. Jul 10 00:31:28.679633 containerd[1430]: 2025-07-10 00:31:28.593 [INFO][4405] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:28.679633 containerd[1430]: 2025-07-10 00:31:28.593 [INFO][4405] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" iface="eth0" netns="/var/run/netns/cni-df538eeb-0f90-c589-a531-801fc9fcf731" Jul 10 00:31:28.679633 containerd[1430]: 2025-07-10 00:31:28.595 [INFO][4405] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" iface="eth0" netns="/var/run/netns/cni-df538eeb-0f90-c589-a531-801fc9fcf731" Jul 10 00:31:28.679633 containerd[1430]: 2025-07-10 00:31:28.598 [INFO][4405] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" iface="eth0" netns="/var/run/netns/cni-df538eeb-0f90-c589-a531-801fc9fcf731" Jul 10 00:31:28.679633 containerd[1430]: 2025-07-10 00:31:28.598 [INFO][4405] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:28.679633 containerd[1430]: 2025-07-10 00:31:28.598 [INFO][4405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:28.679633 containerd[1430]: 2025-07-10 00:31:28.625 [INFO][4431] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" HandleID="k8s-pod-network.92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Workload="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:28.679633 containerd[1430]: 2025-07-10 00:31:28.625 [INFO][4431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:28.679633 containerd[1430]: 2025-07-10 00:31:28.652 [INFO][4431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:28.679633 containerd[1430]: 2025-07-10 00:31:28.666 [WARNING][4431] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" HandleID="k8s-pod-network.92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Workload="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:28.679633 containerd[1430]: 2025-07-10 00:31:28.666 [INFO][4431] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" HandleID="k8s-pod-network.92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Workload="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:28.679633 containerd[1430]: 2025-07-10 00:31:28.668 [INFO][4431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:28.679633 containerd[1430]: 2025-07-10 00:31:28.672 [INFO][4405] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:28.681515 containerd[1430]: time="2025-07-10T00:31:28.679641500Z" level=info msg="TearDown network for sandbox \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\" successfully" Jul 10 00:31:28.681515 containerd[1430]: time="2025-07-10T00:31:28.679667541Z" level=info msg="StopPodSandbox for \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\" returns successfully" Jul 10 00:31:28.681515 containerd[1430]: time="2025-07-10T00:31:28.681248965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-qztlf,Uid:cd45c6d9-27eb-494a-9d3f-a28a02a70496,Namespace:calico-system,Attempt:1,}" Jul 10 00:31:28.682520 systemd[1]: run-netns-cni\x2ddf538eeb\x2d0f90\x2dc589\x2da531\x2d801fc9fcf731.mount: Deactivated successfully. Jul 10 00:31:28.688993 containerd[1430]: 2025-07-10 00:31:28.599 [INFO][4404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:28.688993 containerd[1430]: 2025-07-10 00:31:28.600 [INFO][4404] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" iface="eth0" netns="/var/run/netns/cni-c5c6bb10-b91b-631d-01d5-9f2a99233952" Jul 10 00:31:28.688993 containerd[1430]: 2025-07-10 00:31:28.600 [INFO][4404] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" iface="eth0" netns="/var/run/netns/cni-c5c6bb10-b91b-631d-01d5-9f2a99233952" Jul 10 00:31:28.688993 containerd[1430]: 2025-07-10 00:31:28.600 [INFO][4404] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" iface="eth0" netns="/var/run/netns/cni-c5c6bb10-b91b-631d-01d5-9f2a99233952" Jul 10 00:31:28.688993 containerd[1430]: 2025-07-10 00:31:28.600 [INFO][4404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:28.688993 containerd[1430]: 2025-07-10 00:31:28.600 [INFO][4404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:28.688993 containerd[1430]: 2025-07-10 00:31:28.627 [INFO][4441] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" HandleID="k8s-pod-network.21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Workload="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:28.688993 containerd[1430]: 2025-07-10 00:31:28.628 [INFO][4441] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:28.688993 containerd[1430]: 2025-07-10 00:31:28.668 [INFO][4441] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:28.688993 containerd[1430]: 2025-07-10 00:31:28.678 [WARNING][4441] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" HandleID="k8s-pod-network.21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Workload="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:28.688993 containerd[1430]: 2025-07-10 00:31:28.679 [INFO][4441] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" HandleID="k8s-pod-network.21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Workload="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:28.688993 containerd[1430]: 2025-07-10 00:31:28.680 [INFO][4441] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:28.688993 containerd[1430]: 2025-07-10 00:31:28.685 [INFO][4404] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:28.690279 containerd[1430]: time="2025-07-10T00:31:28.690181011Z" level=info msg="TearDown network for sandbox \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\" successfully" Jul 10 00:31:28.690279 containerd[1430]: time="2025-07-10T00:31:28.690213212Z" level=info msg="StopPodSandbox for \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\" returns successfully" Jul 10 00:31:28.691391 containerd[1430]: time="2025-07-10T00:31:28.691357579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c98486c8f-n2v22,Uid:fde35633-5bd3-4224-9472-f70c96f585a5,Namespace:calico-apiserver,Attempt:1,}" Jul 10 00:31:28.691442 systemd[1]: run-netns-cni\x2dc5c6bb10\x2db91b\x2d631d\x2d01d5\x2d9f2a99233952.mount: Deactivated successfully. Jul 10 00:31:28.926205 systemd-networkd[1371]: calif846b836c13: Link UP Jul 10 00:31:28.927174 systemd-networkd[1371]: calif846b836c13: Gained carrier Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.783 [INFO][4477] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--qztlf-eth0 goldmane-768f4c5c69- calico-system cd45c6d9-27eb-494a-9d3f-a28a02a70496 952 0 2025-07-10 00:31:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-qztlf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif846b836c13 [] [] }} ContainerID="931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" Namespace="calico-system" Pod="goldmane-768f4c5c69-qztlf" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--qztlf-" Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.783 [INFO][4477] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" Namespace="calico-system" Pod="goldmane-768f4c5c69-qztlf" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.814 [INFO][4510] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" HandleID="k8s-pod-network.931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" Workload="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.814 [INFO][4510] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" HandleID="k8s-pod-network.931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" Workload="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137530), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-qztlf", "timestamp":"2025-07-10 00:31:28.814446817 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.814 [INFO][4510] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.814 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.814 [INFO][4510] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.826 [INFO][4510] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" host="localhost" Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.832 [INFO][4510] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.839 [INFO][4510] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.841 [INFO][4510] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.845 [INFO][4510] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.845 [INFO][4510] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" host="localhost" Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.847 [INFO][4510] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9 Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.909 [INFO][4510] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" host="localhost" Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.919 [INFO][4510] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" host="localhost" Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.920 [INFO][4510] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" host="localhost" Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.920 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:31:28.946213 containerd[1430]: 2025-07-10 00:31:28.920 [INFO][4510] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" HandleID="k8s-pod-network.931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" Workload="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:28.948373 containerd[1430]: 2025-07-10 00:31:28.922 [INFO][4477] cni-plugin/k8s.go 418: Populated endpoint ContainerID="931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" Namespace="calico-system" Pod="goldmane-768f4c5c69-qztlf" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--qztlf-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"cd45c6d9-27eb-494a-9d3f-a28a02a70496", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-qztlf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif846b836c13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:28.948373 containerd[1430]: 2025-07-10 00:31:28.923 [INFO][4477] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" Namespace="calico-system" Pod="goldmane-768f4c5c69-qztlf" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:28.948373 containerd[1430]: 2025-07-10 00:31:28.923 [INFO][4477] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif846b836c13 ContainerID="931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" Namespace="calico-system" Pod="goldmane-768f4c5c69-qztlf" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:28.948373 containerd[1430]: 2025-07-10 00:31:28.927 [INFO][4477] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" Namespace="calico-system" Pod="goldmane-768f4c5c69-qztlf" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:28.948373 containerd[1430]: 2025-07-10 00:31:28.927 [INFO][4477] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" Namespace="calico-system" Pod="goldmane-768f4c5c69-qztlf" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--qztlf-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"cd45c6d9-27eb-494a-9d3f-a28a02a70496", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9", Pod:"goldmane-768f4c5c69-qztlf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif846b836c13", MAC:"56:1e:b2:89:06:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:28.948373 containerd[1430]: 2025-07-10 00:31:28.939 [INFO][4477] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9" Namespace="calico-system" Pod="goldmane-768f4c5c69-qztlf" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:28.949405 containerd[1430]: time="2025-07-10T00:31:28.948905360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:28.952665 containerd[1430]: time="2025-07-10T00:31:28.952621232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 10 00:31:28.953880 containerd[1430]: time="2025-07-10T00:31:28.953850922Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:28.960735 containerd[1430]: time="2025-07-10T00:31:28.960576917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:28.963316 containerd[1430]: time="2025-07-10T00:31:28.962619441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.125287923s" Jul 10 00:31:28.963531 containerd[1430]: time="2025-07-10T00:31:28.963503077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 10 00:31:28.971736 containerd[1430]: time="2025-07-10T00:31:28.971645050Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:31:28.972061 containerd[1430]: time="2025-07-10T00:31:28.971723414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:31:28.972061 containerd[1430]: time="2025-07-10T00:31:28.971738974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:28.972285 containerd[1430]: time="2025-07-10T00:31:28.971763375Z" level=info msg="CreateContainer within sandbox \"6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 00:31:28.973325 containerd[1430]: time="2025-07-10T00:31:28.972597809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:28.987015 systemd-networkd[1371]: cali09ecdac462f: Link UP Jul 10 00:31:28.987709 systemd-networkd[1371]: cali09ecdac462f: Gained carrier Jul 10 00:31:29.000092 containerd[1430]: time="2025-07-10T00:31:28.998713598Z" level=info msg="CreateContainer within sandbox \"6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3cf9058b180d8beb80ebc90c8b9aa0c3d136cc434cd435c1573842aecda10f3d\"" Jul 10 00:31:29.000317 containerd[1430]: time="2025-07-10T00:31:29.000289183Z" level=info msg="StartContainer for \"3cf9058b180d8beb80ebc90c8b9aa0c3d136cc434cd435c1573842aecda10f3d\"" Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.772 [INFO][4459] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0 coredns-674b8bbfcf- kube-system c30ef12d-7fea-496b-86fe-53d8caa8bd6a 953 0 2025-07-10 00:30:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-lhmtn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali09ecdac462f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lhmtn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lhmtn-" Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.772 [INFO][4459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lhmtn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.826 [INFO][4503] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" HandleID="k8s-pod-network.eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" Workload="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.826 [INFO][4503] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" HandleID="k8s-pod-network.eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" 
Workload="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-lhmtn", "timestamp":"2025-07-10 00:31:28.826392266 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.827 [INFO][4503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.920 [INFO][4503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.920 [INFO][4503] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.934 [INFO][4503] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" host="localhost" Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.942 [INFO][4503] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.954 [INFO][4503] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.957 [INFO][4503] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.960 [INFO][4503] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.960 [INFO][4503] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" host="localhost" Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.962 [INFO][4503] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.967 [INFO][4503] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" host="localhost" Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.973 [INFO][4503] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" host="localhost" Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.974 [INFO][4503] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" host="localhost" Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.974 [INFO][4503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:31:29.004141 containerd[1430]: 2025-07-10 00:31:28.974 [INFO][4503] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" HandleID="k8s-pod-network.eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" Workload="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:29.004668 containerd[1430]: 2025-07-10 00:31:28.978 [INFO][4459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lhmtn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c30ef12d-7fea-496b-86fe-53d8caa8bd6a", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 30, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-lhmtn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09ecdac462f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:29.004668 containerd[1430]: 2025-07-10 00:31:28.978 [INFO][4459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lhmtn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:29.004668 containerd[1430]: 2025-07-10 00:31:28.978 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09ecdac462f ContainerID="eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lhmtn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:29.004668 containerd[1430]: 2025-07-10 00:31:28.987 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lhmtn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:29.004668 
containerd[1430]: 2025-07-10 00:31:28.987 [INFO][4459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lhmtn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c30ef12d-7fea-496b-86fe-53d8caa8bd6a", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 30, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f", Pod:"coredns-674b8bbfcf-lhmtn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09ecdac462f", MAC:"b6:96:1f:9c:f9:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:29.004668 containerd[1430]: 2025-07-10 00:31:28.998 [INFO][4459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f" Namespace="kube-system" Pod="coredns-674b8bbfcf-lhmtn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:29.027942 containerd[1430]: time="2025-07-10T00:31:29.027508148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:31:29.027942 containerd[1430]: time="2025-07-10T00:31:29.027572951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:31:29.027942 containerd[1430]: time="2025-07-10T00:31:29.027590191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:29.027942 containerd[1430]: time="2025-07-10T00:31:29.027829921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:29.038333 systemd[1]: Started cri-containerd-931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9.scope - libcontainer container 931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9. Jul 10 00:31:29.042796 systemd[1]: Started cri-containerd-3cf9058b180d8beb80ebc90c8b9aa0c3d136cc434cd435c1573842aecda10f3d.scope - libcontainer container 3cf9058b180d8beb80ebc90c8b9aa0c3d136cc434cd435c1573842aecda10f3d. Jul 10 00:31:29.068375 systemd[1]: Started cri-containerd-eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f.scope - libcontainer container eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f. Jul 10 00:31:29.071383 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:31:29.083103 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:31:29.090012 systemd-networkd[1371]: calic7fe9c4d7af: Link UP Jul 10 00:31:29.093224 systemd-networkd[1371]: calic7fe9c4d7af: Gained carrier Jul 10 00:31:29.105310 containerd[1430]: time="2025-07-10T00:31:29.105252286Z" level=info msg="StartContainer for \"3cf9058b180d8beb80ebc90c8b9aa0c3d136cc434cd435c1573842aecda10f3d\" returns successfully" Jul 10 00:31:29.108937 containerd[1430]: time="2025-07-10T00:31:29.108894271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:28.794 [INFO][4465] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0 calico-apiserver-c98486c8f- calico-apiserver fde35633-5bd3-4224-9472-f70c96f585a5 954 0 2025-07-10 00:31:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c98486c8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c98486c8f-n2v22 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic7fe9c4d7af [] [] }} ContainerID="98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-n2v22" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--n2v22-" Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:28.795 [INFO][4465] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-n2v22" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:28.841 [INFO][4519] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" HandleID="k8s-pod-network.98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" Workload="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:28.841 [INFO][4519] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" HandleID="k8s-pod-network.98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" 
Workload="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001373a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c98486c8f-n2v22", "timestamp":"2025-07-10 00:31:28.841201032 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:28.841 [INFO][4519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:28.977 [INFO][4519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:28.977 [INFO][4519] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:29.036 [INFO][4519] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" host="localhost" Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:29.044 [INFO][4519] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:29.060 [INFO][4519] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:29.063 [INFO][4519] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:29.065 [INFO][4519] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:29.065 [INFO][4519] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" host="localhost" Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:29.067 [INFO][4519] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392 Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:29.072 [INFO][4519] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" host="localhost" Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:29.082 [INFO][4519] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" host="localhost" Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:29.082 [INFO][4519] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" host="localhost" Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:29.082 [INFO][4519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:31:29.118373 containerd[1430]: 2025-07-10 00:31:29.082 [INFO][4519] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" HandleID="k8s-pod-network.98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" Workload="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:29.118950 containerd[1430]: 2025-07-10 00:31:29.086 [INFO][4465] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-n2v22" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0", GenerateName:"calico-apiserver-c98486c8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"fde35633-5bd3-4224-9472-f70c96f585a5", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c98486c8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c98486c8f-n2v22", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7fe9c4d7af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:29.118950 containerd[1430]: 2025-07-10 00:31:29.086 [INFO][4465] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-n2v22" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:29.118950 containerd[1430]: 2025-07-10 00:31:29.087 [INFO][4465] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7fe9c4d7af ContainerID="98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-n2v22" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:29.118950 containerd[1430]: 2025-07-10 00:31:29.093 [INFO][4465] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-n2v22" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:29.118950 containerd[1430]: 2025-07-10 00:31:29.093 [INFO][4465] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-n2v22" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0", GenerateName:"calico-apiserver-c98486c8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"fde35633-5bd3-4224-9472-f70c96f585a5", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c98486c8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392", Pod:"calico-apiserver-c98486c8f-n2v22", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7fe9c4d7af", MAC:"8e:8a:f0:d3:12:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:29.118950 containerd[1430]: 2025-07-10 00:31:29.110 [INFO][4465] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-n2v22" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:29.132218 containerd[1430]: time="2025-07-10T00:31:29.131861466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-qztlf,Uid:cd45c6d9-27eb-494a-9d3f-a28a02a70496,Namespace:calico-system,Attempt:1,} returns sandbox id \"931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9\"" Jul 10 00:31:29.161163 containerd[1430]: time="2025-07-10T00:31:29.160478326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:31:29.161163 containerd[1430]: time="2025-07-10T00:31:29.160577450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:31:29.161163 containerd[1430]: time="2025-07-10T00:31:29.160600811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:29.161473 containerd[1430]: time="2025-07-10T00:31:29.161292598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:29.161504 containerd[1430]: time="2025-07-10T00:31:29.161466925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lhmtn,Uid:c30ef12d-7fea-496b-86fe-53d8caa8bd6a,Namespace:kube-system,Attempt:1,} returns sandbox id \"eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f\"" Jul 10 00:31:29.162707 kubelet[2481]: E0710 00:31:29.162653 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:29.168806 containerd[1430]: time="2025-07-10T00:31:29.168325279Z" level=info msg="CreateContainer within sandbox \"eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:31:29.188595 containerd[1430]: time="2025-07-10T00:31:29.188476362Z" level=info msg="CreateContainer within sandbox \"eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"71af3811c645c360db31194bf7259c9f655098ad037202948c551c19c04a496f\"" Jul 10 00:31:29.189181 containerd[1430]: time="2025-07-10T00:31:29.189151188Z" level=info msg="StartContainer for \"71af3811c645c360db31194bf7259c9f655098ad037202948c551c19c04a496f\"" Jul 10 00:31:29.191315 systemd[1]: Started cri-containerd-98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392.scope - libcontainer container 98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392. Jul 10 00:31:29.207101 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:31:29.215434 systemd[1]: Started cri-containerd-71af3811c645c360db31194bf7259c9f655098ad037202948c551c19c04a496f.scope - libcontainer container 71af3811c645c360db31194bf7259c9f655098ad037202948c551c19c04a496f. Jul 10 00:31:29.229384 containerd[1430]: time="2025-07-10T00:31:29.229334750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c98486c8f-n2v22,Uid:fde35633-5bd3-4224-9472-f70c96f585a5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392\"" Jul 10 00:31:29.246470 containerd[1430]: time="2025-07-10T00:31:29.246375308Z" level=info msg="StartContainer for \"71af3811c645c360db31194bf7259c9f655098ad037202948c551c19c04a496f\" returns successfully" Jul 10 00:31:29.509487 containerd[1430]: time="2025-07-10T00:31:29.509165619Z" level=info msg="StopPodSandbox for \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\"" Jul 10 00:31:29.593478 containerd[1430]: 2025-07-10 00:31:29.556 [INFO][4767] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:29.593478 containerd[1430]: 2025-07-10 00:31:29.557 [INFO][4767] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" iface="eth0" netns="/var/run/netns/cni-15a1e4eb-d03a-c252-1069-84be31d405d4" Jul 10 00:31:29.593478 containerd[1430]: 2025-07-10 00:31:29.557 [INFO][4767] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" iface="eth0" netns="/var/run/netns/cni-15a1e4eb-d03a-c252-1069-84be31d405d4" Jul 10 00:31:29.593478 containerd[1430]: 2025-07-10 00:31:29.558 [INFO][4767] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" iface="eth0" netns="/var/run/netns/cni-15a1e4eb-d03a-c252-1069-84be31d405d4" Jul 10 00:31:29.593478 containerd[1430]: 2025-07-10 00:31:29.558 [INFO][4767] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:29.593478 containerd[1430]: 2025-07-10 00:31:29.558 [INFO][4767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:29.593478 containerd[1430]: 2025-07-10 00:31:29.577 [INFO][4776] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" HandleID="k8s-pod-network.259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Workload="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:29.593478 containerd[1430]: 2025-07-10 00:31:29.577 [INFO][4776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:29.593478 containerd[1430]: 2025-07-10 00:31:29.577 [INFO][4776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:29.593478 containerd[1430]: 2025-07-10 00:31:29.587 [WARNING][4776] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" HandleID="k8s-pod-network.259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Workload="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:29.593478 containerd[1430]: 2025-07-10 00:31:29.587 [INFO][4776] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" HandleID="k8s-pod-network.259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Workload="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:29.593478 containerd[1430]: 2025-07-10 00:31:29.589 [INFO][4776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:29.593478 containerd[1430]: 2025-07-10 00:31:29.591 [INFO][4767] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:29.593934 containerd[1430]: time="2025-07-10T00:31:29.593676506Z" level=info msg="TearDown network for sandbox \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\" successfully" Jul 10 00:31:29.593934 containerd[1430]: time="2025-07-10T00:31:29.593701827Z" level=info msg="StopPodSandbox for \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\" returns successfully" Jul 10 00:31:29.594360 containerd[1430]: time="2025-07-10T00:31:29.594322412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c98486c8f-5xk6f,Uid:42229c1c-0bab-4152-99c9-e48ab9263892,Namespace:calico-apiserver,Attempt:1,}" Jul 10 00:31:29.669270 systemd[1]: run-netns-cni\x2d15a1e4eb\x2dd03a\x2dc252\x2d1069\x2d84be31d405d4.mount: Deactivated successfully. 
Jul 10 00:31:29.689496 kubelet[2481]: E0710 00:31:29.689458 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:29.746742 systemd-networkd[1371]: calid854274c524: Link UP Jul 10 00:31:29.747061 systemd-networkd[1371]: calid854274c524: Gained carrier Jul 10 00:31:29.762371 kubelet[2481]: I0710 00:31:29.761979 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lhmtn" podStartSLOduration=35.761961731 podStartE2EDuration="35.761961731s" podCreationTimestamp="2025-07-10 00:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:31:29.703818415 +0000 UTC m=+40.298899617" watchObservedRunningTime="2025-07-10 00:31:29.761961731 +0000 UTC m=+40.357042893" Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.642 [INFO][4785] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0 calico-apiserver-c98486c8f- calico-apiserver 42229c1c-0bab-4152-99c9-e48ab9263892 977 0 2025-07-10 00:31:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c98486c8f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c98486c8f-5xk6f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid854274c524 [] [] }} ContainerID="b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-5xk6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-" Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.643 [INFO][4785] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-5xk6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.680 [INFO][4799] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" HandleID="k8s-pod-network.b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" Workload="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.680 [INFO][4799] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" HandleID="k8s-pod-network.b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" Workload="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d4b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c98486c8f-5xk6f", "timestamp":"2025-07-10 00:31:29.680548928 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:31:29.765805 
containerd[1430]: 2025-07-10 00:31:29.680 [INFO][4799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.680 [INFO][4799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.680 [INFO][4799] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.696 [INFO][4799] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" host="localhost" Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.709 [INFO][4799] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.721 [INFO][4799] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.725 [INFO][4799] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.728 [INFO][4799] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.728 [INFO][4799] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" host="localhost" Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.730 [INFO][4799] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446 Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.734 [INFO][4799] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" host="localhost" Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.740 [INFO][4799] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" host="localhost" Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.740 [INFO][4799] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" host="localhost" Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.740 [INFO][4799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:31:29.765805 containerd[1430]: 2025-07-10 00:31:29.740 [INFO][4799] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" HandleID="k8s-pod-network.b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" Workload="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:29.766440 containerd[1430]: 2025-07-10 00:31:29.743 [INFO][4785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-5xk6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0", GenerateName:"calico-apiserver-c98486c8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"42229c1c-0bab-4152-99c9-e48ab9263892", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c98486c8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c98486c8f-5xk6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid854274c524", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:29.766440 containerd[1430]: 2025-07-10 00:31:29.743 [INFO][4785] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-5xk6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:29.766440 containerd[1430]: 2025-07-10 00:31:29.743 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid854274c524 ContainerID="b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-5xk6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:29.766440 containerd[1430]: 2025-07-10 00:31:29.749 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-5xk6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:29.766440 containerd[1430]: 2025-07-10 00:31:29.752 [INFO][4785] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-5xk6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0", GenerateName:"calico-apiserver-c98486c8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"42229c1c-0bab-4152-99c9-e48ab9263892", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c98486c8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446", Pod:"calico-apiserver-c98486c8f-5xk6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid854274c524", MAC:"7e:cb:2b:57:cc:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:29.766440 containerd[1430]: 2025-07-10 00:31:29.761 [INFO][4785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446" Namespace="calico-apiserver" Pod="calico-apiserver-c98486c8f-5xk6f" WorkloadEndpoint="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:29.773471 systemd-networkd[1371]: calid31ece3a92d: Gained IPv6LL Jul 10 00:31:29.787033 containerd[1430]: time="2025-07-10T00:31:29.786263580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:31:29.787033 containerd[1430]: time="2025-07-10T00:31:29.786854083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:31:29.787033 containerd[1430]: time="2025-07-10T00:31:29.786868564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:29.787033 containerd[1430]: time="2025-07-10T00:31:29.786956967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:29.803325 systemd[1]: run-containerd-runc-k8s.io-b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446-runc.a3nk9X.mount: Deactivated successfully. Jul 10 00:31:29.814240 systemd[1]: Started cri-containerd-b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446.scope - libcontainer container b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446. 
Jul 10 00:31:29.826112 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:31:29.844788 containerd[1430]: time="2025-07-10T00:31:29.844731549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c98486c8f-5xk6f,Uid:42229c1c-0bab-4152-99c9-e48ab9263892,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446\"" Jul 10 00:31:29.905771 kubelet[2481]: I0710 00:31:29.905726 2481 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:31:30.247002 systemd[1]: Started sshd@7-10.0.0.74:22-10.0.0.1:58256.service - OpenSSH per-connection server daemon (10.0.0.1:58256). Jul 10 00:31:30.325126 sshd[4913]: Accepted publickey for core from 10.0.0.1 port 58256 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:30.327225 sshd[4913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:30.332616 systemd-logind[1417]: New session 8 of user core. Jul 10 00:31:30.342228 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 00:31:30.346343 containerd[1430]: time="2025-07-10T00:31:30.346300823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:30.347477 containerd[1430]: time="2025-07-10T00:31:30.347284301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 10 00:31:30.348511 containerd[1430]: time="2025-07-10T00:31:30.348474027Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:30.350770 containerd[1430]: time="2025-07-10T00:31:30.350446344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:30.351405 containerd[1430]: time="2025-07-10T00:31:30.351324658Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.242384146s" Jul 10 00:31:30.351405 containerd[1430]: time="2025-07-10T00:31:30.351378820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 10 00:31:30.352757 containerd[1430]: time="2025-07-10T00:31:30.352712632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 00:31:30.356729 containerd[1430]: time="2025-07-10T00:31:30.356487979Z" level=info msg="CreateContainer within sandbox \"6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 00:31:30.369528 containerd[1430]: time="2025-07-10T00:31:30.369477563Z" level=info msg="CreateContainer within sandbox \"6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f\" 
for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1c0eceee726cf6c1838656455aff83c18e6f1d6f8d8003e20c8551126c5ee93d\"" Jul 10 00:31:30.370347 containerd[1430]: time="2025-07-10T00:31:30.370221792Z" level=info msg="StartContainer for \"1c0eceee726cf6c1838656455aff83c18e6f1d6f8d8003e20c8551126c5ee93d\"" Jul 10 00:31:30.394229 systemd[1]: Started cri-containerd-1c0eceee726cf6c1838656455aff83c18e6f1d6f8d8003e20c8551126c5ee93d.scope - libcontainer container 1c0eceee726cf6c1838656455aff83c18e6f1d6f8d8003e20c8551126c5ee93d. Jul 10 00:31:30.426074 containerd[1430]: time="2025-07-10T00:31:30.424462178Z" level=info msg="StartContainer for \"1c0eceee726cf6c1838656455aff83c18e6f1d6f8d8003e20c8551126c5ee93d\" returns successfully" Jul 10 00:31:30.477272 systemd-networkd[1371]: cali09ecdac462f: Gained IPv6LL Jul 10 00:31:30.508547 containerd[1430]: time="2025-07-10T00:31:30.508437479Z" level=info msg="StopPodSandbox for \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\"" Jul 10 00:31:30.524935 containerd[1430]: time="2025-07-10T00:31:30.524033804Z" level=info msg="StopPodSandbox for \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\"" Jul 10 00:31:30.602514 kubelet[2481]: I0710 00:31:30.599588 2481 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 00:31:30.604207 kubelet[2481]: I0710 00:31:30.604166 2481 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 00:31:30.653322 containerd[1430]: 2025-07-10 00:31:30.588 [INFO][4975] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:30.653322 containerd[1430]: 2025-07-10 00:31:30.588 [INFO][4975] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" iface="eth0" netns="/var/run/netns/cni-277b6d3e-3e20-5125-6ff6-a4b4e6593243" Jul 10 00:31:30.653322 containerd[1430]: 2025-07-10 00:31:30.588 [INFO][4975] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" iface="eth0" netns="/var/run/netns/cni-277b6d3e-3e20-5125-6ff6-a4b4e6593243" Jul 10 00:31:30.653322 containerd[1430]: 2025-07-10 00:31:30.588 [INFO][4975] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" iface="eth0" netns="/var/run/netns/cni-277b6d3e-3e20-5125-6ff6-a4b4e6593243" Jul 10 00:31:30.653322 containerd[1430]: 2025-07-10 00:31:30.588 [INFO][4975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:30.653322 containerd[1430]: 2025-07-10 00:31:30.588 [INFO][4975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:30.653322 containerd[1430]: 2025-07-10 00:31:30.624 [INFO][5002] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" HandleID="k8s-pod-network.4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Workload="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:30.653322 containerd[1430]: 2025-07-10 00:31:30.624 [INFO][5002] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:30.653322 containerd[1430]: 2025-07-10 00:31:30.624 [INFO][5002] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:30.653322 containerd[1430]: 2025-07-10 00:31:30.642 [WARNING][5002] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" HandleID="k8s-pod-network.4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Workload="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:30.653322 containerd[1430]: 2025-07-10 00:31:30.642 [INFO][5002] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" HandleID="k8s-pod-network.4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Workload="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:30.653322 containerd[1430]: 2025-07-10 00:31:30.644 [INFO][5002] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:30.653322 containerd[1430]: 2025-07-10 00:31:30.646 [INFO][4975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:30.653322 containerd[1430]: time="2025-07-10T00:31:30.652442230Z" level=info msg="TearDown network for sandbox \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\" successfully" Jul 10 00:31:30.653322 containerd[1430]: time="2025-07-10T00:31:30.652475111Z" level=info msg="StopPodSandbox for \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\" returns successfully" Jul 10 00:31:30.653322 containerd[1430]: time="2025-07-10T00:31:30.653288823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4wp2k,Uid:c3848646-0b29-4349-9a03-f64c3a70a1ee,Namespace:kube-system,Attempt:1,}" Jul 10 00:31:30.653780 kubelet[2481]: E0710 00:31:30.652810 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:30.663764 systemd[1]: run-netns-cni\x2d277b6d3e\x2d3e20\x2d5125\x2d6ff6\x2da4b4e6593243.mount: Deactivated successfully. 
Jul 10 00:31:30.680893 containerd[1430]: 2025-07-10 00:31:30.592 [INFO][4992] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:30.680893 containerd[1430]: 2025-07-10 00:31:30.592 [INFO][4992] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" iface="eth0" netns="/var/run/netns/cni-075f379e-0d91-8f06-5ed2-741308a06868" Jul 10 00:31:30.680893 containerd[1430]: 2025-07-10 00:31:30.593 [INFO][4992] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" iface="eth0" netns="/var/run/netns/cni-075f379e-0d91-8f06-5ed2-741308a06868" Jul 10 00:31:30.680893 containerd[1430]: 2025-07-10 00:31:30.594 [INFO][4992] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" iface="eth0" netns="/var/run/netns/cni-075f379e-0d91-8f06-5ed2-741308a06868" Jul 10 00:31:30.680893 containerd[1430]: 2025-07-10 00:31:30.594 [INFO][4992] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:30.680893 containerd[1430]: 2025-07-10 00:31:30.594 [INFO][4992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:30.680893 containerd[1430]: 2025-07-10 00:31:30.641 [INFO][5008] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" HandleID="k8s-pod-network.86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Workload="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:30.680893 containerd[1430]: 2025-07-10 00:31:30.641 [INFO][5008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:30.680893 containerd[1430]: 2025-07-10 00:31:30.646 [INFO][5008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:30.680893 containerd[1430]: 2025-07-10 00:31:30.663 [WARNING][5008] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" HandleID="k8s-pod-network.86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Workload="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:30.680893 containerd[1430]: 2025-07-10 00:31:30.664 [INFO][5008] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" HandleID="k8s-pod-network.86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Workload="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:30.680893 containerd[1430]: 2025-07-10 00:31:30.669 [INFO][5008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:30.680893 containerd[1430]: 2025-07-10 00:31:30.675 [INFO][4992] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:30.682085 sshd[4913]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:30.686468 systemd[1]: sshd@7-10.0.0.74:22-10.0.0.1:58256.service: Deactivated successfully. 
Jul 10 00:31:30.688573 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:31:30.689322 systemd-logind[1417]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:31:30.691921 systemd-logind[1417]: Removed session 8. Jul 10 00:31:30.700078 containerd[1430]: time="2025-07-10T00:31:30.698608903Z" level=info msg="TearDown network for sandbox \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\" successfully" Jul 10 00:31:30.700078 containerd[1430]: time="2025-07-10T00:31:30.698645224Z" level=info msg="StopPodSandbox for \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\" returns successfully" Jul 10 00:31:30.700078 containerd[1430]: time="2025-07-10T00:31:30.699509978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cdc7dc74d-2zddx,Uid:44306273-2b03-49e5-af4b-bfd726f65b5f,Namespace:calico-system,Attempt:1,}" Jul 10 00:31:30.701040 systemd[1]: run-netns-cni\x2d075f379e\x2d0d91\x2d8f06\x2d5ed2\x2d741308a06868.mount: Deactivated successfully. Jul 10 00:31:30.712427 kubelet[2481]: E0710 00:31:30.712390 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:30.734027 systemd-networkd[1371]: calif846b836c13: Gained IPv6LL Jul 10 00:31:30.734299 systemd-networkd[1371]: calic7fe9c4d7af: Gained IPv6LL Jul 10 00:31:30.747279 kubelet[2481]: I0710 00:31:30.746324 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8l5v7" podStartSLOduration=19.230219088 podStartE2EDuration="21.746296274s" podCreationTimestamp="2025-07-10 00:31:09 +0000 UTC" firstStartedPulling="2025-07-10 00:31:27.83622095 +0000 UTC m=+38.431302072" lastFinishedPulling="2025-07-10 00:31:30.352298096 +0000 UTC m=+40.947379258" observedRunningTime="2025-07-10 00:31:30.732645624 +0000 UTC m=+41.327726786" watchObservedRunningTime="2025-07-10 00:31:30.746296274 +0000 UTC m=+41.341377396" Jul 10 00:31:30.833759 systemd-networkd[1371]: calib2d91cdfaa3: Link UP Jul 10 00:31:30.834785 systemd-networkd[1371]: calib2d91cdfaa3: Gained carrier Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.728 [INFO][5019] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0 coredns-674b8bbfcf- kube-system c3848646-0b29-4349-9a03-f64c3a70a1ee 1026 0 2025-07-10 00:30:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-4wp2k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib2d91cdfaa3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wp2k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wp2k-" Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.728 [INFO][5019] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wp2k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.772 [INFO][5048] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" HandleID="k8s-pod-network.e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" Workload="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.772 [INFO][5048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" HandleID="k8s-pod-network.e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" Workload="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c30f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-4wp2k", "timestamp":"2025-07-10 00:31:30.772323685 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.772 [INFO][5048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.772 [INFO][5048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.772 [INFO][5048] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.785 [INFO][5048] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" host="localhost" Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.791 [INFO][5048] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.796 [INFO][5048] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.799 [INFO][5048] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.802 [INFO][5048] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.802 [INFO][5048] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" host="localhost" Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.805 [INFO][5048] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.815 [INFO][5048] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" host="localhost" Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.824 [INFO][5048] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" host="localhost" Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.824 [INFO][5048] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] 
handle="k8s-pod-network.e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" host="localhost" Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.824 [INFO][5048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:30.849416 containerd[1430]: 2025-07-10 00:31:30.824 [INFO][5048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" HandleID="k8s-pod-network.e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" Workload="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:30.849953 containerd[1430]: 2025-07-10 00:31:30.830 [INFO][5019] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wp2k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c3848646-0b29-4349-9a03-f64c3a70a1ee", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 30, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-4wp2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2d91cdfaa3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:30.849953 containerd[1430]: 2025-07-10 00:31:30.830 [INFO][5019] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wp2k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:30.849953 containerd[1430]: 2025-07-10 00:31:30.830 [INFO][5019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2d91cdfaa3 ContainerID="e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wp2k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:30.849953 containerd[1430]: 2025-07-10 00:31:30.834 [INFO][5019] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wp2k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:30.849953 containerd[1430]: 2025-07-10 00:31:30.834 [INFO][5019] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wp2k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c3848646-0b29-4349-9a03-f64c3a70a1ee", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 30, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c", Pod:"coredns-674b8bbfcf-4wp2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2d91cdfaa3", MAC:"d2:a2:88:38:a1:4f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:30.849953 containerd[1430]: 2025-07-10 00:31:30.846 [INFO][5019] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c" Namespace="kube-system" Pod="coredns-674b8bbfcf-4wp2k" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:30.868943 containerd[1430]: time="2025-07-10T00:31:30.868694307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:31:30.868943 containerd[1430]: time="2025-07-10T00:31:30.868753869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:31:30.868943 containerd[1430]: time="2025-07-10T00:31:30.868765349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:30.868943 containerd[1430]: time="2025-07-10T00:31:30.868847313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:30.888241 systemd[1]: Started cri-containerd-e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c.scope - libcontainer container e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c. Jul 10 00:31:30.906171 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:31:30.923578 systemd-networkd[1371]: cali0b166d72bd8: Link UP Jul 10 00:31:30.924437 systemd-networkd[1371]: cali0b166d72bd8: Gained carrier Jul 10 00:31:30.932650 containerd[1430]: time="2025-07-10T00:31:30.932487824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4wp2k,Uid:c3848646-0b29-4349-9a03-f64c3a70a1ee,Namespace:kube-system,Attempt:1,} returns sandbox id \"e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c\"" Jul 10 00:31:30.935721 kubelet[2481]: E0710 00:31:30.935682 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:30.944822 containerd[1430]: time="2025-07-10T00:31:30.944776661Z" level=info msg="CreateContainer within sandbox \"e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.775 [INFO][5033] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0 calico-kube-controllers-cdc7dc74d- calico-system 44306273-2b03-49e5-af4b-bfd726f65b5f 1027 0 2025-07-10 00:31:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cdc7dc74d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-cdc7dc74d-2zddx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0b166d72bd8 [] [] }} ContainerID="f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" Namespace="calico-system" Pod="calico-kube-controllers-cdc7dc74d-2zddx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-" Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.775 [INFO][5033] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" Namespace="calico-system" Pod="calico-kube-controllers-cdc7dc74d-2zddx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.812 [INFO][5059] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" HandleID="k8s-pod-network.f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" Workload="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.812 [INFO][5059] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" HandleID="k8s-pod-network.f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" Workload="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000504a50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-cdc7dc74d-2zddx", "timestamp":"2025-07-10 00:31:30.812288557 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.812 [INFO][5059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.824 [INFO][5059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.824 [INFO][5059] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.886 [INFO][5059] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" host="localhost" Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.894 [INFO][5059] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.899 [INFO][5059] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.901 [INFO][5059] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.904 [INFO][5059] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.904 [INFO][5059] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" host="localhost" Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.907 [INFO][5059] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.911 [INFO][5059] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" host="localhost" Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.918 [INFO][5059] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" host="localhost" Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.918 [INFO][5059] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" host="localhost" Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.918 [INFO][5059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:31:30.946753 containerd[1430]: 2025-07-10 00:31:30.918 [INFO][5059] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" HandleID="k8s-pod-network.f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" Workload="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:30.947571 containerd[1430]: 2025-07-10 00:31:30.921 [INFO][5033] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" Namespace="calico-system" Pod="calico-kube-controllers-cdc7dc74d-2zddx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0", GenerateName:"calico-kube-controllers-cdc7dc74d-", Namespace:"calico-system", SelfLink:"", UID:"44306273-2b03-49e5-af4b-bfd726f65b5f", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cdc7dc74d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-cdc7dc74d-2zddx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b166d72bd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:30.947571 containerd[1430]: 2025-07-10 00:31:30.921 [INFO][5033] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" Namespace="calico-system" Pod="calico-kube-controllers-cdc7dc74d-2zddx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:30.947571 containerd[1430]: 2025-07-10 00:31:30.921 [INFO][5033] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b166d72bd8 ContainerID="f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" Namespace="calico-system" Pod="calico-kube-controllers-cdc7dc74d-2zddx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:30.947571 containerd[1430]: 2025-07-10 00:31:30.924 [INFO][5033] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" Namespace="calico-system" Pod="calico-kube-controllers-cdc7dc74d-2zddx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:30.947571 containerd[1430]: 2025-07-10 00:31:30.924 [INFO][5033] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" Namespace="calico-system" Pod="calico-kube-controllers-cdc7dc74d-2zddx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0", GenerateName:"calico-kube-controllers-cdc7dc74d-", Namespace:"calico-system", SelfLink:"", UID:"44306273-2b03-49e5-af4b-bfd726f65b5f", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cdc7dc74d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d", Pod:"calico-kube-controllers-cdc7dc74d-2zddx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b166d72bd8", MAC:"0a:f0:e8:43:f5:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:30.947571 containerd[1430]: 2025-07-10 00:31:30.938 [INFO][5033] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d" Namespace="calico-system" Pod="calico-kube-controllers-cdc7dc74d-2zddx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:30.960546 containerd[1430]: time="2025-07-10T00:31:30.960191019Z" level=info msg="CreateContainer within sandbox \"e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"382239cf65d4d7536f9a25c6aff24e4c7f338a56e950f01c3a1949c09b8adb8a\"" Jul 10 00:31:30.963157 containerd[1430]: time="2025-07-10T00:31:30.963089172Z" level=info msg="StartContainer for \"382239cf65d4d7536f9a25c6aff24e4c7f338a56e950f01c3a1949c09b8adb8a\"" Jul 10 00:31:30.968022 containerd[1430]: time="2025-07-10T00:31:30.967767153Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:31:30.968022 containerd[1430]: time="2025-07-10T00:31:30.967840636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:31:30.968022 containerd[1430]: time="2025-07-10T00:31:30.967856477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:30.968022 containerd[1430]: time="2025-07-10T00:31:30.967950001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:31:30.987261 systemd[1]: Started cri-containerd-f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d.scope - libcontainer container f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d. Jul 10 00:31:30.991395 systemd[1]: Started cri-containerd-382239cf65d4d7536f9a25c6aff24e4c7f338a56e950f01c3a1949c09b8adb8a.scope - libcontainer container 382239cf65d4d7536f9a25c6aff24e4c7f338a56e950f01c3a1949c09b8adb8a. Jul 10 00:31:31.004271 systemd-resolved[1308]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:31:31.022496 containerd[1430]: time="2025-07-10T00:31:31.021509101Z" level=info msg="StartContainer for \"382239cf65d4d7536f9a25c6aff24e4c7f338a56e950f01c3a1949c09b8adb8a\" returns successfully" Jul 10 00:31:31.028835 containerd[1430]: time="2025-07-10T00:31:31.028786776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cdc7dc74d-2zddx,Uid:44306273-2b03-49e5-af4b-bfd726f65b5f,Namespace:calico-system,Attempt:1,} returns sandbox id \"f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d\"" Jul 10 00:31:31.717308 kubelet[2481]: E0710 00:31:31.717273 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:31.718695 kubelet[2481]: E0710 00:31:31.718651 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:31.736794 kubelet[2481]: I0710 00:31:31.736728 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4wp2k" podStartSLOduration=37.736673668 podStartE2EDuration="37.736673668s" podCreationTimestamp="2025-07-10 00:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:31:31.735737833 +0000 UTC m=+42.330818995" watchObservedRunningTime="2025-07-10 00:31:31.736673668 +0000 UTC m=+42.331754830" Jul 10 00:31:31.758179 systemd-networkd[1371]: calid854274c524: Gained IPv6LL Jul 10 00:31:31.795670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603903253.mount: Deactivated successfully. 
Jul 10 00:31:32.184507 containerd[1430]: time="2025-07-10T00:31:32.184462465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:32.185969 containerd[1430]: time="2025-07-10T00:31:32.185933360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 10 00:31:32.190342 containerd[1430]: time="2025-07-10T00:31:32.190303801Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 1.837549328s" Jul 10 00:31:32.190342 containerd[1430]: time="2025-07-10T00:31:32.190340403Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 10 00:31:32.191149 containerd[1430]: time="2025-07-10T00:31:32.191031348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:31:32.197629 containerd[1430]: time="2025-07-10T00:31:32.197589311Z" level=info msg="CreateContainer within sandbox \"931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 00:31:32.202930 containerd[1430]: time="2025-07-10T00:31:32.202840825Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:32.203772 containerd[1430]: time="2025-07-10T00:31:32.203738218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:32.206570 systemd-networkd[1371]: calib2d91cdfaa3: Gained IPv6LL Jul 10 00:31:32.211528 containerd[1430]: time="2025-07-10T00:31:32.211483945Z" level=info msg="CreateContainer within sandbox \"931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f9973f77b182947f8aab4ad708fe3b61a5637e89908e75d7c7b9cd911e3d8321\"" Jul 10 00:31:32.211926 containerd[1430]: time="2025-07-10T00:31:32.211893720Z" level=info msg="StartContainer for \"f9973f77b182947f8aab4ad708fe3b61a5637e89908e75d7c7b9cd911e3d8321\"" Jul 10 00:31:32.249265 systemd[1]: Started cri-containerd-f9973f77b182947f8aab4ad708fe3b61a5637e89908e75d7c7b9cd911e3d8321.scope - libcontainer container f9973f77b182947f8aab4ad708fe3b61a5637e89908e75d7c7b9cd911e3d8321. 
Jul 10 00:31:32.330111 containerd[1430]: time="2025-07-10T00:31:32.330035729Z" level=info msg="StartContainer for \"f9973f77b182947f8aab4ad708fe3b61a5637e89908e75d7c7b9cd911e3d8321\" returns successfully" Jul 10 00:31:32.720909 kubelet[2481]: E0710 00:31:32.720837 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:32.721817 kubelet[2481]: E0710 00:31:32.721746 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:32.737121 kubelet[2481]: I0710 00:31:32.737038 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-qztlf" podStartSLOduration=21.680811496 podStartE2EDuration="24.737022661s" podCreationTimestamp="2025-07-10 00:31:08 +0000 UTC" firstStartedPulling="2025-07-10 00:31:29.134709259 +0000 UTC m=+39.729790421" lastFinishedPulling="2025-07-10 00:31:32.190920424 +0000 UTC m=+42.786001586" observedRunningTime="2025-07-10 00:31:32.736861175 +0000 UTC m=+43.331942337" watchObservedRunningTime="2025-07-10 00:31:32.737022661 +0000 UTC m=+43.332103823" Jul 10 00:31:32.973269 systemd-networkd[1371]: cali0b166d72bd8: Gained IPv6LL Jul 10 00:31:33.615942 containerd[1430]: time="2025-07-10T00:31:33.615888850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:33.617616 containerd[1430]: time="2025-07-10T00:31:33.617582952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 10 00:31:33.618512 containerd[1430]: time="2025-07-10T00:31:33.618485064Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:33.620451 containerd[1430]: time="2025-07-10T00:31:33.620420974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:33.621304 containerd[1430]: time="2025-07-10T00:31:33.621273645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.430185015s" Jul 10 00:31:33.621374 containerd[1430]: time="2025-07-10T00:31:33.621307406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 00:31:33.622811 containerd[1430]: time="2025-07-10T00:31:33.622784980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:31:33.629792 containerd[1430]: time="2025-07-10T00:31:33.629407659Z" level=info msg="CreateContainer within sandbox \"98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:31:33.652143 containerd[1430]: time="2025-07-10T00:31:33.652091039Z" level=info 
msg="CreateContainer within sandbox \"98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"82f352d41ecb227c5e7cc34bf2a8b66c166ca4d789d446d280ad91d3d2915411\"" Jul 10 00:31:33.652701 containerd[1430]: time="2025-07-10T00:31:33.652630938Z" level=info msg="StartContainer for \"82f352d41ecb227c5e7cc34bf2a8b66c166ca4d789d446d280ad91d3d2915411\"" Jul 10 00:31:33.687261 systemd[1]: Started cri-containerd-82f352d41ecb227c5e7cc34bf2a8b66c166ca4d789d446d280ad91d3d2915411.scope - libcontainer container 82f352d41ecb227c5e7cc34bf2a8b66c166ca4d789d446d280ad91d3d2915411. Jul 10 00:31:33.723582 containerd[1430]: time="2025-07-10T00:31:33.723489860Z" level=info msg="StartContainer for \"82f352d41ecb227c5e7cc34bf2a8b66c166ca4d789d446d280ad91d3d2915411\" returns successfully" Jul 10 00:31:33.726819 kubelet[2481]: E0710 00:31:33.726723 2481 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:31:33.946071 containerd[1430]: time="2025-07-10T00:31:33.945926980Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:33.949130 containerd[1430]: time="2025-07-10T00:31:33.949087134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 00:31:33.951471 containerd[1430]: time="2025-07-10T00:31:33.951430899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 328.602478ms" Jul 10 00:31:33.951471 containerd[1430]: time="2025-07-10T00:31:33.951469020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 00:31:33.953142 containerd[1430]: time="2025-07-10T00:31:33.952910113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 00:31:33.957315 containerd[1430]: time="2025-07-10T00:31:33.957280071Z" level=info msg="CreateContainer within sandbox \"b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:31:33.972272 containerd[1430]: time="2025-07-10T00:31:33.972222011Z" level=info msg="CreateContainer within sandbox \"b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1171fa4fa202f9babb52d5d02a3b308d622b9c1f529b7f6ec107c33aaf06c608\"" Jul 10 00:31:33.972833 containerd[1430]: time="2025-07-10T00:31:33.972759310Z" level=info msg="StartContainer for \"1171fa4fa202f9babb52d5d02a3b308d622b9c1f529b7f6ec107c33aaf06c608\"" Jul 10 00:31:33.999230 systemd[1]: Started cri-containerd-1171fa4fa202f9babb52d5d02a3b308d622b9c1f529b7f6ec107c33aaf06c608.scope - libcontainer container 1171fa4fa202f9babb52d5d02a3b308d622b9c1f529b7f6ec107c33aaf06c608. 
Jul 10 00:31:34.044797 containerd[1430]: time="2025-07-10T00:31:34.044742958Z" level=info msg="StartContainer for \"1171fa4fa202f9babb52d5d02a3b308d622b9c1f529b7f6ec107c33aaf06c608\" returns successfully" Jul 10 00:31:34.729637 kubelet[2481]: I0710 00:31:34.729288 2481 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:31:34.746208 kubelet[2481]: I0710 00:31:34.745867 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c98486c8f-n2v22" podStartSLOduration=25.35418803 podStartE2EDuration="29.745851231s" podCreationTimestamp="2025-07-10 00:31:05 +0000 UTC" firstStartedPulling="2025-07-10 00:31:29.230528637 +0000 UTC m=+39.825609799" lastFinishedPulling="2025-07-10 00:31:33.622191838 +0000 UTC m=+44.217273000" observedRunningTime="2025-07-10 00:31:33.743053087 +0000 UTC m=+44.338134249" watchObservedRunningTime="2025-07-10 00:31:34.745851231 +0000 UTC m=+45.340932393" Jul 10 00:31:34.746208 kubelet[2481]: I0710 00:31:34.745968 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c98486c8f-5xk6f" podStartSLOduration=25.63980343 podStartE2EDuration="29.745963355s" podCreationTimestamp="2025-07-10 00:31:05 +0000 UTC" firstStartedPulling="2025-07-10 00:31:29.846105804 +0000 UTC m=+40.441186966" lastFinishedPulling="2025-07-10 00:31:33.952265769 +0000 UTC m=+44.547346891" observedRunningTime="2025-07-10 00:31:34.744652348 +0000 UTC m=+45.339733510" watchObservedRunningTime="2025-07-10 00:31:34.745963355 +0000 UTC m=+45.341044517" Jul 10 00:31:35.626440 containerd[1430]: time="2025-07-10T00:31:35.626264944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:35.636935 containerd[1430]: time="2025-07-10T00:31:35.636880072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 10 00:31:35.694785 systemd[1]: Started sshd@8-10.0.0.74:22-10.0.0.1:35514.service - OpenSSH per-connection server daemon (10.0.0.1:35514). 
Jul 10 00:31:35.710347 containerd[1430]: time="2025-07-10T00:31:35.710301974Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:35.732697 containerd[1430]: time="2025-07-10T00:31:35.731154736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:31:35.732697 containerd[1430]: time="2025-07-10T00:31:35.731924483Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.77898589s" Jul 10 00:31:35.732697 containerd[1430]: time="2025-07-10T00:31:35.731958564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 10 00:31:35.748575 containerd[1430]: time="2025-07-10T00:31:35.748532298Z" level=info msg="CreateContainer within sandbox \"f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 00:31:35.778014 containerd[1430]: time="2025-07-10T00:31:35.777963957Z" level=info msg="CreateContainer within sandbox \"f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ac6e14a96d9818269ddb426bb4e9c6f8c12ee5cc681aba8c2cf32cab683b95de\"" Jul 10 00:31:35.779763 containerd[1430]: time="2025-07-10T00:31:35.779725298Z" level=info msg="StartContainer for \"ac6e14a96d9818269ddb426bb4e9c6f8c12ee5cc681aba8c2cf32cab683b95de\"" Jul 10 00:31:35.798574 sshd[5413]: Accepted publickey for core from 10.0.0.1 port 35514 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:35.805212 sshd[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:35.811541 systemd-logind[1417]: New session 9 of user core. Jul 10 00:31:35.818235 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:31:35.822595 systemd[1]: Started cri-containerd-ac6e14a96d9818269ddb426bb4e9c6f8c12ee5cc681aba8c2cf32cab683b95de.scope - libcontainer container ac6e14a96d9818269ddb426bb4e9c6f8c12ee5cc681aba8c2cf32cab683b95de. Jul 10 00:31:35.868790 containerd[1430]: time="2025-07-10T00:31:35.868748741Z" level=info msg="StartContainer for \"ac6e14a96d9818269ddb426bb4e9c6f8c12ee5cc681aba8c2cf32cab683b95de\" returns successfully" Jul 10 00:31:36.317448 sshd[5413]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:36.320834 systemd[1]: sshd@8-10.0.0.74:22-10.0.0.1:35514.service: Deactivated successfully. Jul 10 00:31:36.324063 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:31:36.325384 systemd-logind[1417]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:31:36.327115 systemd-logind[1417]: Removed session 9. 
Jul 10 00:31:36.789161 kubelet[2481]: I0710 00:31:36.789086 2481 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-cdc7dc74d-2zddx" podStartSLOduration=23.086077288 podStartE2EDuration="27.789068906s" podCreationTimestamp="2025-07-10 00:31:09 +0000 UTC" firstStartedPulling="2025-07-10 00:31:31.030002102 +0000 UTC m=+41.625083264" lastFinishedPulling="2025-07-10 00:31:35.73299376 +0000 UTC m=+46.328074882" observedRunningTime="2025-07-10 00:31:36.751716038 +0000 UTC m=+47.346797200" watchObservedRunningTime="2025-07-10 00:31:36.789068906 +0000 UTC m=+47.384150068" Jul 10 00:31:41.327838 systemd[1]: Started sshd@9-10.0.0.74:22-10.0.0.1:35518.service - OpenSSH per-connection server daemon (10.0.0.1:35518). Jul 10 00:31:41.368578 sshd[5514]: Accepted publickey for core from 10.0.0.1 port 35518 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:41.370065 sshd[5514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:41.373718 systemd-logind[1417]: New session 10 of user core. Jul 10 00:31:41.380200 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:31:41.555131 sshd[5514]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:41.563612 systemd[1]: sshd@9-10.0.0.74:22-10.0.0.1:35518.service: Deactivated successfully. Jul 10 00:31:41.567972 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:31:41.569280 systemd-logind[1417]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:31:41.574389 systemd[1]: Started sshd@10-10.0.0.74:22-10.0.0.1:35524.service - OpenSSH per-connection server daemon (10.0.0.1:35524). Jul 10 00:31:41.575705 systemd-logind[1417]: Removed session 10. Jul 10 00:31:41.609158 sshd[5530]: Accepted publickey for core from 10.0.0.1 port 35524 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:41.611387 sshd[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:41.614972 systemd-logind[1417]: New session 11 of user core. Jul 10 00:31:41.623230 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:31:41.871903 sshd[5530]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:41.882952 systemd[1]: sshd@10-10.0.0.74:22-10.0.0.1:35524.service: Deactivated successfully. Jul 10 00:31:41.886965 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:31:41.894120 systemd-logind[1417]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:31:41.901504 systemd[1]: Started sshd@11-10.0.0.74:22-10.0.0.1:35534.service - OpenSSH per-connection server daemon (10.0.0.1:35534). Jul 10 00:31:41.904112 systemd-logind[1417]: Removed session 11. Jul 10 00:31:41.960525 sshd[5542]: Accepted publickey for core from 10.0.0.1 port 35534 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:41.961753 sshd[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:41.965725 systemd-logind[1417]: New session 12 of user core. Jul 10 00:31:41.980253 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 00:31:42.194094 sshd[5542]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:42.201679 systemd[1]: sshd@11-10.0.0.74:22-10.0.0.1:35534.service: Deactivated successfully. Jul 10 00:31:42.203809 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:31:42.204567 systemd-logind[1417]: Session 12 logged out. 
Waiting for processes to exit. Jul 10 00:31:42.206508 systemd-logind[1417]: Removed session 12. Jul 10 00:31:47.208580 systemd[1]: Started sshd@12-10.0.0.74:22-10.0.0.1:58488.service - OpenSSH per-connection server daemon (10.0.0.1:58488). Jul 10 00:31:47.249098 sshd[5566]: Accepted publickey for core from 10.0.0.1 port 58488 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:47.249978 sshd[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:47.254741 systemd-logind[1417]: New session 13 of user core. Jul 10 00:31:47.271068 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:31:47.450031 sshd[5566]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:47.460744 systemd[1]: sshd@12-10.0.0.74:22-10.0.0.1:58488.service: Deactivated successfully. Jul 10 00:31:47.462444 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:31:47.464573 systemd-logind[1417]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:31:47.475695 systemd[1]: Started sshd@13-10.0.0.74:22-10.0.0.1:58498.service - OpenSSH per-connection server daemon (10.0.0.1:58498). Jul 10 00:31:47.477449 systemd-logind[1417]: Removed session 13. Jul 10 00:31:47.517382 sshd[5580]: Accepted publickey for core from 10.0.0.1 port 58498 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:47.518927 sshd[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:47.522447 systemd-logind[1417]: New session 14 of user core. Jul 10 00:31:47.527235 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:31:47.750383 sshd[5580]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:47.770852 systemd[1]: sshd@13-10.0.0.74:22-10.0.0.1:58498.service: Deactivated successfully. Jul 10 00:31:47.772503 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:31:47.774933 systemd-logind[1417]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:31:47.777086 systemd[1]: Started sshd@14-10.0.0.74:22-10.0.0.1:58508.service - OpenSSH per-connection server daemon (10.0.0.1:58508). Jul 10 00:31:47.777997 systemd-logind[1417]: Removed session 14. Jul 10 00:31:47.820368 sshd[5593]: Accepted publickey for core from 10.0.0.1 port 58508 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:47.821721 sshd[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:47.825875 systemd-logind[1417]: New session 15 of user core. Jul 10 00:31:47.836306 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:31:48.575152 sshd[5593]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:48.583897 systemd[1]: sshd@14-10.0.0.74:22-10.0.0.1:58508.service: Deactivated successfully. Jul 10 00:31:48.589389 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:31:48.590895 systemd-logind[1417]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:31:48.600039 systemd[1]: Started sshd@15-10.0.0.74:22-10.0.0.1:58514.service - OpenSSH per-connection server daemon (10.0.0.1:58514). Jul 10 00:31:48.604113 systemd-logind[1417]: Removed session 15. 
Jul 10 00:31:48.644707 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 58514 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:48.647677 sshd[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:48.655918 systemd-logind[1417]: New session 16 of user core. Jul 10 00:31:48.661255 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:31:49.055687 sshd[5621]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:49.066014 systemd[1]: sshd@15-10.0.0.74:22-10.0.0.1:58514.service: Deactivated successfully. Jul 10 00:31:49.068435 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:31:49.071414 systemd-logind[1417]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:31:49.083380 systemd[1]: Started sshd@16-10.0.0.74:22-10.0.0.1:58524.service - OpenSSH per-connection server daemon (10.0.0.1:58524). Jul 10 00:31:49.084711 systemd-logind[1417]: Removed session 16. Jul 10 00:31:49.118389 sshd[5633]: Accepted publickey for core from 10.0.0.1 port 58524 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:49.119823 sshd[5633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:49.124321 systemd-logind[1417]: New session 17 of user core. Jul 10 00:31:49.134239 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:31:49.253885 sshd[5633]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:49.257552 systemd[1]: sshd@16-10.0.0.74:22-10.0.0.1:58524.service: Deactivated successfully. Jul 10 00:31:49.260966 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:31:49.261784 systemd-logind[1417]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:31:49.263199 systemd-logind[1417]: Removed session 17. Jul 10 00:31:49.473985 containerd[1430]: time="2025-07-10T00:31:49.473947141Z" level=info msg="StopPodSandbox for \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\"" Jul 10 00:31:49.560774 containerd[1430]: 2025-07-10 00:31:49.516 [WARNING][5657] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0", GenerateName:"calico-kube-controllers-cdc7dc74d-", Namespace:"calico-system", SelfLink:"", UID:"44306273-2b03-49e5-af4b-bfd726f65b5f", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cdc7dc74d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d", Pod:"calico-kube-controllers-cdc7dc74d-2zddx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b166d72bd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:49.560774 containerd[1430]: 2025-07-10 00:31:49.517 [INFO][5657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:49.560774 containerd[1430]: 2025-07-10 00:31:49.517 [INFO][5657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" iface="eth0" netns="" Jul 10 00:31:49.560774 containerd[1430]: 2025-07-10 00:31:49.517 [INFO][5657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:49.560774 containerd[1430]: 2025-07-10 00:31:49.517 [INFO][5657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:49.560774 containerd[1430]: 2025-07-10 00:31:49.539 [INFO][5669] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" HandleID="k8s-pod-network.86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Workload="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:49.560774 containerd[1430]: 2025-07-10 00:31:49.539 [INFO][5669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:49.560774 containerd[1430]: 2025-07-10 00:31:49.539 [INFO][5669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:49.560774 containerd[1430]: 2025-07-10 00:31:49.550 [WARNING][5669] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" HandleID="k8s-pod-network.86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Workload="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:49.560774 containerd[1430]: 2025-07-10 00:31:49.550 [INFO][5669] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" HandleID="k8s-pod-network.86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Workload="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:49.560774 containerd[1430]: 2025-07-10 00:31:49.556 [INFO][5669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:49.560774 containerd[1430]: 2025-07-10 00:31:49.558 [INFO][5657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:49.560774 containerd[1430]: time="2025-07-10T00:31:49.560520970Z" level=info msg="TearDown network for sandbox \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\" successfully" Jul 10 00:31:49.560774 containerd[1430]: time="2025-07-10T00:31:49.560544611Z" level=info msg="StopPodSandbox for \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\" returns successfully" Jul 10 00:31:49.561493 containerd[1430]: time="2025-07-10T00:31:49.561020824Z" level=info msg="RemovePodSandbox for \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\"" Jul 10 00:31:49.571804 containerd[1430]: time="2025-07-10T00:31:49.571759006Z" level=info msg="Forcibly stopping sandbox \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\"" Jul 10 00:31:49.643330 containerd[1430]: 2025-07-10 00:31:49.608 [WARNING][5687] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0", GenerateName:"calico-kube-controllers-cdc7dc74d-", Namespace:"calico-system", SelfLink:"", UID:"44306273-2b03-49e5-af4b-bfd726f65b5f", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cdc7dc74d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f46dc2af6737e6c7d7b1bea76912da4041dea0e4af45908564c757feaac2d94d", Pod:"calico-kube-controllers-cdc7dc74d-2zddx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0b166d72bd8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:49.643330 containerd[1430]: 2025-07-10 00:31:49.608 [INFO][5687] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:49.643330 containerd[1430]: 2025-07-10 00:31:49.608 [INFO][5687] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" iface="eth0" netns="" Jul 10 00:31:49.643330 containerd[1430]: 2025-07-10 00:31:49.608 [INFO][5687] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:49.643330 containerd[1430]: 2025-07-10 00:31:49.608 [INFO][5687] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:49.643330 containerd[1430]: 2025-07-10 00:31:49.629 [INFO][5696] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" HandleID="k8s-pod-network.86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Workload="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:49.643330 containerd[1430]: 2025-07-10 00:31:49.629 [INFO][5696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:49.643330 containerd[1430]: 2025-07-10 00:31:49.629 [INFO][5696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:49.643330 containerd[1430]: 2025-07-10 00:31:49.638 [WARNING][5696] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" HandleID="k8s-pod-network.86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Workload="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:49.643330 containerd[1430]: 2025-07-10 00:31:49.638 [INFO][5696] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" HandleID="k8s-pod-network.86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Workload="localhost-k8s-calico--kube--controllers--cdc7dc74d--2zddx-eth0" Jul 10 00:31:49.643330 containerd[1430]: 2025-07-10 00:31:49.640 [INFO][5696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:49.643330 containerd[1430]: 2025-07-10 00:31:49.641 [INFO][5687] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8" Jul 10 00:31:49.643807 containerd[1430]: time="2025-07-10T00:31:49.643367855Z" level=info msg="TearDown network for sandbox \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\" successfully" Jul 10 00:31:49.657527 containerd[1430]: time="2025-07-10T00:31:49.657464891Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:31:49.657629 containerd[1430]: time="2025-07-10T00:31:49.657555053Z" level=info msg="RemovePodSandbox \"86f9c2a7251cb132f77d3ddc82f55c087d8f814a88cf8a57fd8d7ceca43441f8\" returns successfully" Jul 10 00:31:49.658423 containerd[1430]: time="2025-07-10T00:31:49.658137270Z" level=info msg="StopPodSandbox for \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\"" Jul 10 00:31:49.723956 containerd[1430]: 2025-07-10 00:31:49.691 [WARNING][5715] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8l5v7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08557a49-6fcf-4236-a001-85a4edaa7064", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f", Pod:"csi-node-driver-8l5v7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid31ece3a92d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:49.723956 containerd[1430]: 2025-07-10 00:31:49.692 [INFO][5715] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:49.723956 containerd[1430]: 2025-07-10 00:31:49.692 [INFO][5715] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" iface="eth0" netns="" Jul 10 00:31:49.723956 containerd[1430]: 2025-07-10 00:31:49.692 [INFO][5715] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:49.723956 containerd[1430]: 2025-07-10 00:31:49.692 [INFO][5715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:49.723956 containerd[1430]: 2025-07-10 00:31:49.710 [INFO][5724] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" HandleID="k8s-pod-network.81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Workload="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:49.723956 containerd[1430]: 2025-07-10 00:31:49.710 [INFO][5724] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:49.723956 containerd[1430]: 2025-07-10 00:31:49.710 [INFO][5724] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:49.723956 containerd[1430]: 2025-07-10 00:31:49.719 [WARNING][5724] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" HandleID="k8s-pod-network.81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Workload="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:49.723956 containerd[1430]: 2025-07-10 00:31:49.719 [INFO][5724] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" HandleID="k8s-pod-network.81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Workload="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:49.723956 containerd[1430]: 2025-07-10 00:31:49.720 [INFO][5724] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:49.723956 containerd[1430]: 2025-07-10 00:31:49.721 [INFO][5715] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:49.725239 containerd[1430]: time="2025-07-10T00:31:49.724430130Z" level=info msg="TearDown network for sandbox \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\" successfully" Jul 10 00:31:49.725239 containerd[1430]: time="2025-07-10T00:31:49.724460731Z" level=info msg="StopPodSandbox for \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\" returns successfully" Jul 10 00:31:49.725239 containerd[1430]: time="2025-07-10T00:31:49.724892143Z" level=info msg="RemovePodSandbox for \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\"" Jul 10 00:31:49.725239 containerd[1430]: time="2025-07-10T00:31:49.724920304Z" level=info msg="Forcibly stopping sandbox \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\"" Jul 10 00:31:49.801660 containerd[1430]: 2025-07-10 00:31:49.759 [WARNING][5741] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8l5v7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08557a49-6fcf-4236-a001-85a4edaa7064", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a137edae990155a615ddfe316524c2ad9dd769ec06101a0b10a9c48dd8a7b2f", Pod:"csi-node-driver-8l5v7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid31ece3a92d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:49.801660 containerd[1430]: 2025-07-10 00:31:49.760 [INFO][5741] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:49.801660 containerd[1430]: 2025-07-10 00:31:49.760 [INFO][5741] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" iface="eth0" netns="" Jul 10 00:31:49.801660 containerd[1430]: 2025-07-10 00:31:49.760 [INFO][5741] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:49.801660 containerd[1430]: 2025-07-10 00:31:49.760 [INFO][5741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:49.801660 containerd[1430]: 2025-07-10 00:31:49.785 [INFO][5750] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" HandleID="k8s-pod-network.81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Workload="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:49.801660 containerd[1430]: 2025-07-10 00:31:49.785 [INFO][5750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:49.801660 containerd[1430]: 2025-07-10 00:31:49.785 [INFO][5750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:49.801660 containerd[1430]: 2025-07-10 00:31:49.795 [WARNING][5750] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" HandleID="k8s-pod-network.81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Workload="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:49.801660 containerd[1430]: 2025-07-10 00:31:49.795 [INFO][5750] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" HandleID="k8s-pod-network.81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Workload="localhost-k8s-csi--node--driver--8l5v7-eth0" Jul 10 00:31:49.801660 containerd[1430]: 2025-07-10 00:31:49.796 [INFO][5750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:49.801660 containerd[1430]: 2025-07-10 00:31:49.799 [INFO][5741] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b" Jul 10 00:31:49.803478 containerd[1430]: time="2025-07-10T00:31:49.802001307Z" level=info msg="TearDown network for sandbox \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\" successfully" Jul 10 00:31:49.808199 containerd[1430]: time="2025-07-10T00:31:49.808099598Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:31:49.808199 containerd[1430]: time="2025-07-10T00:31:49.808166440Z" level=info msg="RemovePodSandbox \"81bf9e74c7cb2b64b224cf6179e016f03e8b70ac9187945071e55549cf0f506b\" returns successfully" Jul 10 00:31:49.809403 containerd[1430]: time="2025-07-10T00:31:49.808926102Z" level=info msg="StopPodSandbox for \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\"" Jul 10 00:31:49.880298 containerd[1430]: 2025-07-10 00:31:49.846 [WARNING][5767] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--qztlf-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"cd45c6d9-27eb-494a-9d3f-a28a02a70496", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9", Pod:"goldmane-768f4c5c69-qztlf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif846b836c13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:49.880298 containerd[1430]: 2025-07-10 00:31:49.846 [INFO][5767] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:49.880298 containerd[1430]: 2025-07-10 00:31:49.846 [INFO][5767] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" iface="eth0" netns="" Jul 10 00:31:49.880298 containerd[1430]: 2025-07-10 00:31:49.847 [INFO][5767] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:49.880298 containerd[1430]: 2025-07-10 00:31:49.847 [INFO][5767] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:49.880298 containerd[1430]: 2025-07-10 00:31:49.865 [INFO][5776] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" HandleID="k8s-pod-network.92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Workload="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:49.880298 containerd[1430]: 2025-07-10 00:31:49.866 [INFO][5776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:49.880298 containerd[1430]: 2025-07-10 00:31:49.866 [INFO][5776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:49.880298 containerd[1430]: 2025-07-10 00:31:49.875 [WARNING][5776] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" HandleID="k8s-pod-network.92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Workload="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:49.880298 containerd[1430]: 2025-07-10 00:31:49.875 [INFO][5776] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" HandleID="k8s-pod-network.92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Workload="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:49.880298 containerd[1430]: 2025-07-10 00:31:49.877 [INFO][5776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:49.880298 containerd[1430]: 2025-07-10 00:31:49.878 [INFO][5767] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:49.880898 containerd[1430]: time="2025-07-10T00:31:49.880746757Z" level=info msg="TearDown network for sandbox \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\" successfully" Jul 10 00:31:49.880898 containerd[1430]: time="2025-07-10T00:31:49.880781958Z" level=info msg="StopPodSandbox for \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\" returns successfully" Jul 10 00:31:49.881321 containerd[1430]: time="2025-07-10T00:31:49.881293213Z" level=info msg="RemovePodSandbox for \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\"" Jul 10 00:31:49.881428 containerd[1430]: time="2025-07-10T00:31:49.881328894Z" level=info msg="Forcibly stopping sandbox \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\"" Jul 10 00:31:49.953660 containerd[1430]: 2025-07-10 00:31:49.919 [WARNING][5794] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--qztlf-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"cd45c6d9-27eb-494a-9d3f-a28a02a70496", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"931c6fd8a1d72c84c0b13c22412c9ac2edd6398ff462603b9729a8e4cdac0bb9", Pod:"goldmane-768f4c5c69-qztlf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif846b836c13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:49.953660 containerd[1430]: 2025-07-10 00:31:49.919 [INFO][5794] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:49.953660 containerd[1430]: 2025-07-10 00:31:49.919 [INFO][5794] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" iface="eth0" netns="" Jul 10 00:31:49.953660 containerd[1430]: 2025-07-10 00:31:49.919 [INFO][5794] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:49.953660 containerd[1430]: 2025-07-10 00:31:49.919 [INFO][5794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:49.953660 containerd[1430]: 2025-07-10 00:31:49.940 [INFO][5802] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" HandleID="k8s-pod-network.92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Workload="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:49.953660 containerd[1430]: 2025-07-10 00:31:49.940 [INFO][5802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:49.953660 containerd[1430]: 2025-07-10 00:31:49.940 [INFO][5802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:49.953660 containerd[1430]: 2025-07-10 00:31:49.948 [WARNING][5802] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" HandleID="k8s-pod-network.92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Workload="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:49.953660 containerd[1430]: 2025-07-10 00:31:49.948 [INFO][5802] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" HandleID="k8s-pod-network.92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Workload="localhost-k8s-goldmane--768f4c5c69--qztlf-eth0" Jul 10 00:31:49.953660 containerd[1430]: 2025-07-10 00:31:49.950 [INFO][5802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:49.953660 containerd[1430]: 2025-07-10 00:31:49.952 [INFO][5794] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9" Jul 10 00:31:49.954178 containerd[1430]: time="2025-07-10T00:31:49.953693644Z" level=info msg="TearDown network for sandbox \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\" successfully" Jul 10 00:31:49.956759 containerd[1430]: time="2025-07-10T00:31:49.956730010Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:31:49.956822 containerd[1430]: time="2025-07-10T00:31:49.956786531Z" level=info msg="RemovePodSandbox \"92a714a38f0c05c9007173a1474d9c004b397a78eb0977be19aedce68e359ad9\" returns successfully" Jul 10 00:31:49.957297 containerd[1430]: time="2025-07-10T00:31:49.957275305Z" level=info msg="StopPodSandbox for \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\"" Jul 10 00:31:50.031127 containerd[1430]: 2025-07-10 00:31:49.992 [WARNING][5820] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" WorkloadEndpoint="localhost-k8s-whisker--848d798b6--7lz4s-eth0" Jul 10 00:31:50.031127 containerd[1430]: 2025-07-10 00:31:49.992 [INFO][5820] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:50.031127 containerd[1430]: 2025-07-10 00:31:49.992 [INFO][5820] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" iface="eth0" netns="" Jul 10 00:31:50.031127 containerd[1430]: 2025-07-10 00:31:49.992 [INFO][5820] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:50.031127 containerd[1430]: 2025-07-10 00:31:49.992 [INFO][5820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:50.031127 containerd[1430]: 2025-07-10 00:31:50.017 [INFO][5829] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" HandleID="k8s-pod-network.44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Workload="localhost-k8s-whisker--848d798b6--7lz4s-eth0" Jul 10 00:31:50.031127 containerd[1430]: 2025-07-10 00:31:50.017 [INFO][5829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:50.031127 containerd[1430]: 2025-07-10 00:31:50.017 [INFO][5829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:50.031127 containerd[1430]: 2025-07-10 00:31:50.026 [WARNING][5829] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" HandleID="k8s-pod-network.44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Workload="localhost-k8s-whisker--848d798b6--7lz4s-eth0" Jul 10 00:31:50.031127 containerd[1430]: 2025-07-10 00:31:50.026 [INFO][5829] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" HandleID="k8s-pod-network.44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Workload="localhost-k8s-whisker--848d798b6--7lz4s-eth0" Jul 10 00:31:50.031127 containerd[1430]: 2025-07-10 00:31:50.027 [INFO][5829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:50.031127 containerd[1430]: 2025-07-10 00:31:50.029 [INFO][5820] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:50.031127 containerd[1430]: time="2025-07-10T00:31:50.031081448Z" level=info msg="TearDown network for sandbox \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\" successfully" Jul 10 00:31:50.031127 containerd[1430]: time="2025-07-10T00:31:50.031107809Z" level=info msg="StopPodSandbox for \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\" returns successfully" Jul 10 00:31:50.032814 containerd[1430]: time="2025-07-10T00:31:50.032064395Z" level=info msg="RemovePodSandbox for \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\"" Jul 10 00:31:50.032814 containerd[1430]: time="2025-07-10T00:31:50.032208119Z" level=info msg="Forcibly stopping sandbox \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\"" Jul 10 00:31:50.099039 containerd[1430]: 2025-07-10 00:31:50.065 [WARNING][5847] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" WorkloadEndpoint="localhost-k8s-whisker--848d798b6--7lz4s-eth0" Jul 10 00:31:50.099039 containerd[1430]: 2025-07-10 00:31:50.065 [INFO][5847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:50.099039 containerd[1430]: 2025-07-10 00:31:50.065 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" iface="eth0" netns="" Jul 10 00:31:50.099039 containerd[1430]: 2025-07-10 00:31:50.065 [INFO][5847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:50.099039 containerd[1430]: 2025-07-10 00:31:50.065 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:50.099039 containerd[1430]: 2025-07-10 00:31:50.084 [INFO][5856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" HandleID="k8s-pod-network.44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Workload="localhost-k8s-whisker--848d798b6--7lz4s-eth0" Jul 10 00:31:50.099039 containerd[1430]: 2025-07-10 00:31:50.084 [INFO][5856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:50.099039 containerd[1430]: 2025-07-10 00:31:50.084 [INFO][5856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:50.099039 containerd[1430]: 2025-07-10 00:31:50.094 [WARNING][5856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" HandleID="k8s-pod-network.44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Workload="localhost-k8s-whisker--848d798b6--7lz4s-eth0" Jul 10 00:31:50.099039 containerd[1430]: 2025-07-10 00:31:50.094 [INFO][5856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" HandleID="k8s-pod-network.44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Workload="localhost-k8s-whisker--848d798b6--7lz4s-eth0" Jul 10 00:31:50.099039 containerd[1430]: 2025-07-10 00:31:50.095 [INFO][5856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:50.099039 containerd[1430]: 2025-07-10 00:31:50.097 [INFO][5847] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a" Jul 10 00:31:50.100067 containerd[1430]: time="2025-07-10T00:31:50.099468108Z" level=info msg="TearDown network for sandbox \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\" successfully" Jul 10 00:31:50.102669 containerd[1430]: time="2025-07-10T00:31:50.102624756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:31:50.102759 containerd[1430]: time="2025-07-10T00:31:50.102687358Z" level=info msg="RemovePodSandbox \"44a8e8f9ba88e8bcbe3b50c1de6d60d0984c7bad21be6b8b4ac91809fa071a1a\" returns successfully" Jul 10 00:31:50.105736 containerd[1430]: time="2025-07-10T00:31:50.105683161Z" level=info msg="StopPodSandbox for \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\"" Jul 10 00:31:50.179639 containerd[1430]: 2025-07-10 00:31:50.142 [WARNING][5874] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c30ef12d-7fea-496b-86fe-53d8caa8bd6a", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 30, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f", Pod:"coredns-674b8bbfcf-lhmtn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09ecdac462f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:50.179639 containerd[1430]: 2025-07-10 00:31:50.142 [INFO][5874] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:50.179639 containerd[1430]: 2025-07-10 00:31:50.142 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" iface="eth0" netns="" Jul 10 00:31:50.179639 containerd[1430]: 2025-07-10 00:31:50.142 [INFO][5874] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:50.179639 containerd[1430]: 2025-07-10 00:31:50.142 [INFO][5874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:50.179639 containerd[1430]: 2025-07-10 00:31:50.165 [INFO][5883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" HandleID="k8s-pod-network.5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Workload="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:50.179639 containerd[1430]: 2025-07-10 00:31:50.166 [INFO][5883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:50.179639 containerd[1430]: 2025-07-10 00:31:50.166 [INFO][5883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:31:50.179639 containerd[1430]: 2025-07-10 00:31:50.175 [WARNING][5883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" HandleID="k8s-pod-network.5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Workload="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:50.179639 containerd[1430]: 2025-07-10 00:31:50.175 [INFO][5883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" HandleID="k8s-pod-network.5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Workload="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:50.179639 containerd[1430]: 2025-07-10 00:31:50.176 [INFO][5883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:50.179639 containerd[1430]: 2025-07-10 00:31:50.177 [INFO][5874] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:50.179639 containerd[1430]: time="2025-07-10T00:31:50.179515212Z" level=info msg="TearDown network for sandbox \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\" successfully" Jul 10 00:31:50.179639 containerd[1430]: time="2025-07-10T00:31:50.179540853Z" level=info msg="StopPodSandbox for \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\" returns successfully" Jul 10 00:31:50.180592 containerd[1430]: time="2025-07-10T00:31:50.180287594Z" level=info msg="RemovePodSandbox for \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\"" Jul 10 00:31:50.180592 containerd[1430]: time="2025-07-10T00:31:50.180328435Z" level=info msg="Forcibly stopping sandbox \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\"" Jul 10 00:31:50.248818 containerd[1430]: 2025-07-10 00:31:50.214 [WARNING][5901] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c30ef12d-7fea-496b-86fe-53d8caa8bd6a", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 30, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eaa141a775cf39e008b94218626f7bb0c69edf934ca69fc56a69285971a6b03f", Pod:"coredns-674b8bbfcf-lhmtn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09ecdac462f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:50.248818 containerd[1430]: 2025-07-10 00:31:50.215 [INFO][5901] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:50.248818 containerd[1430]: 2025-07-10 00:31:50.215 [INFO][5901] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" iface="eth0" netns="" Jul 10 00:31:50.248818 containerd[1430]: 2025-07-10 00:31:50.215 [INFO][5901] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:50.248818 containerd[1430]: 2025-07-10 00:31:50.215 [INFO][5901] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:50.248818 containerd[1430]: 2025-07-10 00:31:50.232 [INFO][5910] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" HandleID="k8s-pod-network.5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Workload="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:50.248818 containerd[1430]: 2025-07-10 00:31:50.233 [INFO][5910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:50.248818 containerd[1430]: 2025-07-10 00:31:50.233 [INFO][5910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:31:50.248818 containerd[1430]: 2025-07-10 00:31:50.242 [WARNING][5910] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" HandleID="k8s-pod-network.5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Workload="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:50.248818 containerd[1430]: 2025-07-10 00:31:50.242 [INFO][5910] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" HandleID="k8s-pod-network.5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Workload="localhost-k8s-coredns--674b8bbfcf--lhmtn-eth0" Jul 10 00:31:50.248818 containerd[1430]: 2025-07-10 00:31:50.245 [INFO][5910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:50.248818 containerd[1430]: 2025-07-10 00:31:50.247 [INFO][5901] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0" Jul 10 00:31:50.250021 containerd[1430]: time="2025-07-10T00:31:50.248930341Z" level=info msg="TearDown network for sandbox \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\" successfully" Jul 10 00:31:50.264731 containerd[1430]: time="2025-07-10T00:31:50.264688379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:31:50.264941 containerd[1430]: time="2025-07-10T00:31:50.264921145Z" level=info msg="RemovePodSandbox \"5903710892a76fc121630d6ce7abb2cf244ca842e8200d70b6c274e4144e3ae0\" returns successfully" Jul 10 00:31:50.265506 containerd[1430]: time="2025-07-10T00:31:50.265486201Z" level=info msg="StopPodSandbox for \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\"" Jul 10 00:31:50.343028 containerd[1430]: 2025-07-10 00:31:50.303 [WARNING][5928] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0", GenerateName:"calico-apiserver-c98486c8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"fde35633-5bd3-4224-9472-f70c96f585a5", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c98486c8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392", Pod:"calico-apiserver-c98486c8f-n2v22", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7fe9c4d7af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:50.343028 containerd[1430]: 2025-07-10 00:31:50.303 [INFO][5928] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:50.343028 containerd[1430]: 2025-07-10 00:31:50.303 [INFO][5928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" iface="eth0" netns="" Jul 10 00:31:50.343028 containerd[1430]: 2025-07-10 00:31:50.303 [INFO][5928] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:50.343028 containerd[1430]: 2025-07-10 00:31:50.303 [INFO][5928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:50.343028 containerd[1430]: 2025-07-10 00:31:50.326 [INFO][5936] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" HandleID="k8s-pod-network.21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Workload="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:50.343028 containerd[1430]: 2025-07-10 00:31:50.326 [INFO][5936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:50.343028 containerd[1430]: 2025-07-10 00:31:50.327 [INFO][5936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:50.343028 containerd[1430]: 2025-07-10 00:31:50.337 [WARNING][5936] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" HandleID="k8s-pod-network.21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Workload="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:50.343028 containerd[1430]: 2025-07-10 00:31:50.337 [INFO][5936] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" HandleID="k8s-pod-network.21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Workload="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:50.343028 containerd[1430]: 2025-07-10 00:31:50.339 [INFO][5936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:50.343028 containerd[1430]: 2025-07-10 00:31:50.341 [INFO][5928] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:50.343028 containerd[1430]: time="2025-07-10T00:31:50.342907552Z" level=info msg="TearDown network for sandbox \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\" successfully" Jul 10 00:31:50.343028 containerd[1430]: time="2025-07-10T00:31:50.342933073Z" level=info msg="StopPodSandbox for \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\" returns successfully" Jul 10 00:31:50.343451 containerd[1430]: time="2025-07-10T00:31:50.343345924Z" level=info msg="RemovePodSandbox for \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\"" Jul 10 00:31:50.343451 containerd[1430]: time="2025-07-10T00:31:50.343377925Z" level=info msg="Forcibly stopping sandbox \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\"" Jul 10 00:31:50.408608 containerd[1430]: 2025-07-10 00:31:50.376 [WARNING][5954] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0", GenerateName:"calico-apiserver-c98486c8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"fde35633-5bd3-4224-9472-f70c96f585a5", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c98486c8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"98a8b8cc3943b1842db5b66a96853b690002347b0bc635037db2229fbce07392", Pod:"calico-apiserver-c98486c8f-n2v22", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic7fe9c4d7af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:50.408608 containerd[1430]: 2025-07-10 00:31:50.376 [INFO][5954] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:50.408608 containerd[1430]: 2025-07-10 00:31:50.376 [INFO][5954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" iface="eth0" netns="" Jul 10 00:31:50.408608 containerd[1430]: 2025-07-10 00:31:50.376 [INFO][5954] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:50.408608 containerd[1430]: 2025-07-10 00:31:50.376 [INFO][5954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:50.408608 containerd[1430]: 2025-07-10 00:31:50.395 [INFO][5963] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" HandleID="k8s-pod-network.21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Workload="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:50.408608 containerd[1430]: 2025-07-10 00:31:50.395 [INFO][5963] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:50.408608 containerd[1430]: 2025-07-10 00:31:50.395 [INFO][5963] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:50.408608 containerd[1430]: 2025-07-10 00:31:50.404 [WARNING][5963] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" HandleID="k8s-pod-network.21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Workload="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:50.408608 containerd[1430]: 2025-07-10 00:31:50.404 [INFO][5963] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" HandleID="k8s-pod-network.21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Workload="localhost-k8s-calico--apiserver--c98486c8f--n2v22-eth0" Jul 10 00:31:50.408608 containerd[1430]: 2025-07-10 00:31:50.405 [INFO][5963] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:50.408608 containerd[1430]: 2025-07-10 00:31:50.407 [INFO][5954] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5" Jul 10 00:31:50.409183 containerd[1430]: time="2025-07-10T00:31:50.408650259Z" level=info msg="TearDown network for sandbox \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\" successfully" Jul 10 00:31:50.417171 containerd[1430]: time="2025-07-10T00:31:50.417114094Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:31:50.417244 containerd[1430]: time="2025-07-10T00:31:50.417227297Z" level=info msg="RemovePodSandbox \"21dea9dec872563e94dfd77b153825c6b8e2e84cfd24aa75d8e6ef09d2b25ca5\" returns successfully" Jul 10 00:31:50.417981 containerd[1430]: time="2025-07-10T00:31:50.417678830Z" level=info msg="StopPodSandbox for \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\"" Jul 10 00:31:50.497073 containerd[1430]: 2025-07-10 00:31:50.465 [WARNING][5983] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0", GenerateName:"calico-apiserver-c98486c8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"42229c1c-0bab-4152-99c9-e48ab9263892", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c98486c8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446", Pod:"calico-apiserver-c98486c8f-5xk6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid854274c524", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:50.497073 containerd[1430]: 2025-07-10 00:31:50.465 [INFO][5983] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:50.497073 containerd[1430]: 2025-07-10 00:31:50.465 [INFO][5983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" iface="eth0" netns="" Jul 10 00:31:50.497073 containerd[1430]: 2025-07-10 00:31:50.465 [INFO][5983] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:50.497073 containerd[1430]: 2025-07-10 00:31:50.465 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:50.497073 containerd[1430]: 2025-07-10 00:31:50.483 [INFO][5992] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" HandleID="k8s-pod-network.259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Workload="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:50.497073 containerd[1430]: 2025-07-10 00:31:50.483 [INFO][5992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:50.497073 containerd[1430]: 2025-07-10 00:31:50.483 [INFO][5992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:50.497073 containerd[1430]: 2025-07-10 00:31:50.492 [WARNING][5992] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" HandleID="k8s-pod-network.259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Workload="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:50.497073 containerd[1430]: 2025-07-10 00:31:50.492 [INFO][5992] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" HandleID="k8s-pod-network.259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Workload="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:50.497073 containerd[1430]: 2025-07-10 00:31:50.493 [INFO][5992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:50.497073 containerd[1430]: 2025-07-10 00:31:50.495 [INFO][5983] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:50.497933 containerd[1430]: time="2025-07-10T00:31:50.497119717Z" level=info msg="TearDown network for sandbox \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\" successfully" Jul 10 00:31:50.497933 containerd[1430]: time="2025-07-10T00:31:50.497143798Z" level=info msg="StopPodSandbox for \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\" returns successfully" Jul 10 00:31:50.498391 containerd[1430]: time="2025-07-10T00:31:50.498113625Z" level=info msg="RemovePodSandbox for \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\"" Jul 10 00:31:50.498391 containerd[1430]: time="2025-07-10T00:31:50.498147066Z" level=info msg="Forcibly stopping sandbox \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\"" Jul 10 00:31:50.572905 containerd[1430]: 2025-07-10 00:31:50.531 [WARNING][6009] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0", GenerateName:"calico-apiserver-c98486c8f-", Namespace:"calico-apiserver", SelfLink:"", UID:"42229c1c-0bab-4152-99c9-e48ab9263892", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 31, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c98486c8f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b25080e78627d87e10314068e466cbacdaaa34776488dd7536d978f2e488e446", Pod:"calico-apiserver-c98486c8f-5xk6f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid854274c524", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:50.572905 containerd[1430]: 2025-07-10 00:31:50.531 [INFO][6009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:50.572905 containerd[1430]: 2025-07-10 00:31:50.531 [INFO][6009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" iface="eth0" netns="" Jul 10 00:31:50.572905 containerd[1430]: 2025-07-10 00:31:50.531 [INFO][6009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:50.572905 containerd[1430]: 2025-07-10 00:31:50.531 [INFO][6009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:50.572905 containerd[1430]: 2025-07-10 00:31:50.555 [INFO][6018] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" HandleID="k8s-pod-network.259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Workload="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:50.572905 containerd[1430]: 2025-07-10 00:31:50.555 [INFO][6018] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:50.572905 containerd[1430]: 2025-07-10 00:31:50.555 [INFO][6018] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:31:50.572905 containerd[1430]: 2025-07-10 00:31:50.566 [WARNING][6018] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" HandleID="k8s-pod-network.259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Workload="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:50.572905 containerd[1430]: 2025-07-10 00:31:50.566 [INFO][6018] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" HandleID="k8s-pod-network.259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Workload="localhost-k8s-calico--apiserver--c98486c8f--5xk6f-eth0" Jul 10 00:31:50.572905 containerd[1430]: 2025-07-10 00:31:50.569 [INFO][6018] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:50.572905 containerd[1430]: 2025-07-10 00:31:50.571 [INFO][6009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6" Jul 10 00:31:50.573837 containerd[1430]: time="2025-07-10T00:31:50.573439478Z" level=info msg="TearDown network for sandbox \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\" successfully" Jul 10 00:31:50.576787 containerd[1430]: time="2025-07-10T00:31:50.576637046Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:31:50.576787 containerd[1430]: time="2025-07-10T00:31:50.576700488Z" level=info msg="RemovePodSandbox \"259ab8bac1722b4c03f2dd96f63b72c47cd5e55888bcd1bbdbfccb66fe46b5b6\" returns successfully" Jul 10 00:31:50.577318 containerd[1430]: time="2025-07-10T00:31:50.577291625Z" level=info msg="StopPodSandbox for \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\"" Jul 10 00:31:50.647030 containerd[1430]: 2025-07-10 00:31:50.611 [WARNING][6036] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c3848646-0b29-4349-9a03-f64c3a70a1ee", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 30, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c", Pod:"coredns-674b8bbfcf-4wp2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2d91cdfaa3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:50.647030 containerd[1430]: 2025-07-10 00:31:50.612 [INFO][6036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:50.647030 containerd[1430]: 2025-07-10 00:31:50.612 [INFO][6036] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" iface="eth0" netns="" Jul 10 00:31:50.647030 containerd[1430]: 2025-07-10 00:31:50.612 [INFO][6036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:50.647030 containerd[1430]: 2025-07-10 00:31:50.612 [INFO][6036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:50.647030 containerd[1430]: 2025-07-10 00:31:50.632 [INFO][6044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" HandleID="k8s-pod-network.4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Workload="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:50.647030 containerd[1430]: 2025-07-10 00:31:50.632 [INFO][6044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:50.647030 containerd[1430]: 2025-07-10 00:31:50.632 [INFO][6044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:31:50.647030 containerd[1430]: 2025-07-10 00:31:50.642 [WARNING][6044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" HandleID="k8s-pod-network.4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Workload="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:50.647030 containerd[1430]: 2025-07-10 00:31:50.642 [INFO][6044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" HandleID="k8s-pod-network.4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Workload="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:50.647030 containerd[1430]: 2025-07-10 00:31:50.643 [INFO][6044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:50.647030 containerd[1430]: 2025-07-10 00:31:50.645 [INFO][6036] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:50.647030 containerd[1430]: time="2025-07-10T00:31:50.646998081Z" level=info msg="TearDown network for sandbox \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\" successfully" Jul 10 00:31:50.647030 containerd[1430]: time="2025-07-10T00:31:50.647021562Z" level=info msg="StopPodSandbox for \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\" returns successfully" Jul 10 00:31:50.649241 containerd[1430]: time="2025-07-10T00:31:50.648920295Z" level=info msg="RemovePodSandbox for \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\"" Jul 10 00:31:50.649241 containerd[1430]: time="2025-07-10T00:31:50.648954256Z" level=info msg="Forcibly stopping sandbox \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\"" Jul 10 00:31:50.712753 containerd[1430]: 2025-07-10 00:31:50.680 [WARNING][6063] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c3848646-0b29-4349-9a03-f64c3a70a1ee", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 30, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7a64a918d87d107e81ac00a717c5cfd438e79cd3de3e40d2051787a212e492c", Pod:"coredns-674b8bbfcf-4wp2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib2d91cdfaa3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:31:50.712753 containerd[1430]: 2025-07-10 00:31:50.680 [INFO][6063] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:50.712753 containerd[1430]: 2025-07-10 00:31:50.680 [INFO][6063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" iface="eth0" netns="" Jul 10 00:31:50.712753 containerd[1430]: 2025-07-10 00:31:50.680 [INFO][6063] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:50.712753 containerd[1430]: 2025-07-10 00:31:50.680 [INFO][6063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:50.712753 containerd[1430]: 2025-07-10 00:31:50.697 [INFO][6072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" HandleID="k8s-pod-network.4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Workload="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:50.712753 containerd[1430]: 2025-07-10 00:31:50.698 [INFO][6072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:31:50.712753 containerd[1430]: 2025-07-10 00:31:50.698 [INFO][6072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:31:50.712753 containerd[1430]: 2025-07-10 00:31:50.707 [WARNING][6072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" HandleID="k8s-pod-network.4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Workload="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:50.712753 containerd[1430]: 2025-07-10 00:31:50.707 [INFO][6072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" HandleID="k8s-pod-network.4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Workload="localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0" Jul 10 00:31:50.712753 containerd[1430]: 2025-07-10 00:31:50.709 [INFO][6072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:31:50.712753 containerd[1430]: 2025-07-10 00:31:50.711 [INFO][6063] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba" Jul 10 00:31:50.713617 containerd[1430]: time="2025-07-10T00:31:50.712855551Z" level=info msg="TearDown network for sandbox \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\" successfully" Jul 10 00:31:50.716513 containerd[1430]: time="2025-07-10T00:31:50.716469292Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:31:50.716697 containerd[1430]: time="2025-07-10T00:31:50.716613376Z" level=info msg="RemovePodSandbox \"4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba\" returns successfully" Jul 10 00:31:54.267928 systemd[1]: Started sshd@17-10.0.0.74:22-10.0.0.1:58336.service - OpenSSH per-connection server daemon (10.0.0.1:58336). Jul 10 00:31:54.314279 sshd[6085]: Accepted publickey for core from 10.0.0.1 port 58336 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:54.315667 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:54.320401 systemd-logind[1417]: New session 18 of user core. Jul 10 00:31:54.335242 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 00:31:54.512913 sshd[6085]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:54.517068 systemd[1]: sshd@17-10.0.0.74:22-10.0.0.1:58336.service: Deactivated successfully. Jul 10 00:31:54.519227 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 00:31:54.520311 systemd-logind[1417]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:31:54.521476 systemd-logind[1417]: Removed session 18. Jul 10 00:31:59.539302 systemd[1]: Started sshd@18-10.0.0.74:22-10.0.0.1:58344.service - OpenSSH per-connection server daemon (10.0.0.1:58344). Jul 10 00:31:59.572943 sshd[6105]: Accepted publickey for core from 10.0.0.1 port 58344 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:31:59.574330 sshd[6105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:31:59.581288 systemd-logind[1417]: New session 19 of user core. Jul 10 00:31:59.591265 systemd[1]: Started session-19.scope - Session 19 of User core. 
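The three teardown passes above all follow the same IPAM release order: acquire the host-wide IPAM lock, try to release by handleID, and when the handle is already gone ("Asked to release address but it doesn't exist. Ignoring"), fall back to releasing by workloadID before dropping the lock. Below is a minimal, self-contained Go sketch of that fallback order; the types and function names are hypothetical stand-ins, not Calico's actual libcalico-go API.

```go
// Sketch of the release order visible in the ipam_plugin.go records above:
// host-wide lock, release by handle ID first, workload ID as fallback.
// All types here are illustrative assumptions, not Calico's real API.
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errNotFound = errors.New("address not found")

// ipamStore stands in for the real IPAM backend (Kubernetes CRDs or etcd).
type ipamStore struct {
	mu         sync.Mutex          // the "host-wide IPAM lock" in the log
	byHandle   map[string][]string // handleID   -> allocated addresses
	byWorkload map[string][]string // workloadID -> allocated addresses
}

func (s *ipamStore) releaseByHandle(handleID string) error {
	if _, ok := s.byHandle[handleID]; !ok {
		// logged as "Asked to release address but it doesn't exist. Ignoring"
		return errNotFound
	}
	delete(s.byHandle, handleID)
	return nil
}

func (s *ipamStore) releaseByWorkload(workloadID string) error {
	if _, ok := s.byWorkload[workloadID]; !ok {
		return errNotFound
	}
	delete(s.byWorkload, workloadID)
	return nil
}

// releaseIP mirrors the logged sequence: lock, handleID first, then workloadID.
func (s *ipamStore) releaseIP(handleID, workloadID string) error {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if err := s.releaseByHandle(handleID); err == nil || !errors.Is(err, errNotFound) {
		return err
	}
	// "Releasing address using workloadID" -- the fallback path in the log.
	if err := s.releaseByWorkload(workloadID); !errors.Is(err, errNotFound) {
		return err
	}
	return nil // nothing allocated under either key: teardown is idempotent
}

func main() {
	s := &ipamStore{byHandle: map[string][]string{}, byWorkload: map[string][]string{}}
	// A repeat teardown of an already-released sandbox still succeeds,
	// matching the "RemovePodSandbox ... returns successfully" records.
	err := s.releaseIP(
		"k8s-pod-network.4738d321f17538eeb34e7316793bc00f30bccec12185fd898aa62890aeec46ba",
		"localhost-k8s-coredns--674b8bbfcf--4wp2k-eth0",
	)
	fmt.Println("release result:", err) // release result: <nil>
}
```

Treating a missing allocation as success is what makes the forced "Forcibly stopping sandbox" retries above converge: each StopPodSandbox/RemovePodSandbox pass returns successfully even when an earlier pass already released the address.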
Jul 10 00:31:59.963708 sshd[6105]: pam_unix(sshd:session): session closed for user core Jul 10 00:31:59.967417 systemd[1]: sshd@18-10.0.0.74:22-10.0.0.1:58344.service: Deactivated successfully. Jul 10 00:31:59.969628 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:31:59.971325 systemd-logind[1417]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:31:59.972422 systemd-logind[1417]: Removed session 19. Jul 10 00:32:04.751422 systemd[1]: run-containerd-runc-k8s.io-f9973f77b182947f8aab4ad708fe3b61a5637e89908e75d7c7b9cd911e3d8321-runc.TqObJI.mount: Deactivated successfully. Jul 10 00:32:04.979076 systemd[1]: Started sshd@19-10.0.0.74:22-10.0.0.1:51914.service - OpenSSH per-connection server daemon (10.0.0.1:51914). Jul 10 00:32:05.026619 sshd[6190]: Accepted publickey for core from 10.0.0.1 port 51914 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:32:05.028295 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:32:05.032222 systemd-logind[1417]: New session 20 of user core. Jul 10 00:32:05.042265 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 00:32:05.361662 sshd[6190]: pam_unix(sshd:session): session closed for user core Jul 10 00:32:05.365308 systemd[1]: sshd@19-10.0.0.74:22-10.0.0.1:51914.service: Deactivated successfully. Jul 10 00:32:05.367399 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:32:05.369541 systemd-logind[1417]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:32:05.370749 systemd-logind[1417]: Removed session 20.
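Each containerd[1430] journal record above wraps an inner Calico plugin line of the form "2025-07-10 00:31:50.404 [WARNING][5963] ipam/ipam_plugin.go 429: message". When triaging logs like these, a small extractor for that inner prefix can help; the sketch below is fitted only to the format shown in this log and assumes nothing about other Calico output.

```go
// Throwaway extractor for the inner Calico plugin records embedded in the
// containerd journal lines above: "<date> <time> [LEVEL][pid] file.go NNN: msg".
package main

import (
	"fmt"
	"regexp"
)

var calicoRe = regexp.MustCompile(
	`(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.+)`)

func main() {
	line := `2025-07-10 00:31:50.404 [WARNING][5963] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.`
	if m := calicoRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("time=%s level=%s pid=%s src=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
```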