Jul 14 21:59:54.905683 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 14 21:59:54.905705 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jul 14 20:26:44 -00 2025
Jul 14 21:59:54.905715 kernel: KASLR enabled
Jul 14 21:59:54.905721 kernel: efi: EFI v2.7 by EDK II
Jul 14 21:59:54.905727 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 14 21:59:54.905733 kernel: random: crng init done
Jul 14 21:59:54.905741 kernel: ACPI: Early table checksum verification disabled
Jul 14 21:59:54.905747 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 14 21:59:54.905754 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 21:59:54.905762 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:59:54.905769 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:59:54.905775 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:59:54.905781 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:59:54.905788 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:59:54.905796 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:59:54.905804 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:59:54.905811 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:59:54.905818 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:59:54.905824 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 14 21:59:54.905831 kernel: NUMA: Failed to initialise from firmware
Jul 14 21:59:54.905838 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:59:54.905845 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 14 21:59:54.905851 kernel: Zone ranges:
Jul 14 21:59:54.905858 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:59:54.905865 kernel: DMA32 empty
Jul 14 21:59:54.905873 kernel: Normal empty
Jul 14 21:59:54.905880 kernel: Movable zone start for each node
Jul 14 21:59:54.905886 kernel: Early memory node ranges
Jul 14 21:59:54.905893 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 14 21:59:54.905900 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 14 21:59:54.905907 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 14 21:59:54.905914 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 14 21:59:54.905921 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 14 21:59:54.905928 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 14 21:59:54.905935 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 14 21:59:54.905941 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:59:54.905948 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 14 21:59:54.905956 kernel: psci: probing for conduit method from ACPI.
Jul 14 21:59:54.905963 kernel: psci: PSCIv1.1 detected in firmware.
Jul 14 21:59:54.905970 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 14 21:59:54.905979 kernel: psci: Trusted OS migration not required
Jul 14 21:59:54.905987 kernel: psci: SMC Calling Convention v1.1
Jul 14 21:59:54.905994 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 14 21:59:54.906003 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 14 21:59:54.906010 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 14 21:59:54.906017 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 14 21:59:54.906025 kernel: Detected PIPT I-cache on CPU0
Jul 14 21:59:54.906032 kernel: CPU features: detected: GIC system register CPU interface
Jul 14 21:59:54.906039 kernel: CPU features: detected: Hardware dirty bit management
Jul 14 21:59:54.906046 kernel: CPU features: detected: Spectre-v4
Jul 14 21:59:54.906053 kernel: CPU features: detected: Spectre-BHB
Jul 14 21:59:54.906061 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 14 21:59:54.906068 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 14 21:59:54.906077 kernel: CPU features: detected: ARM erratum 1418040
Jul 14 21:59:54.906084 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 14 21:59:54.906091 kernel: alternatives: applying boot alternatives
Jul 14 21:59:54.906099 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=219fd31147cccfc1f4834c1854a4109714661cabce52e86d5c93000af393c45b
Jul 14 21:59:54.906107 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 21:59:54.906114 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 21:59:54.906121 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 21:59:54.906128 kernel: Fallback order for Node 0: 0
Jul 14 21:59:54.906135 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 14 21:59:54.906142 kernel: Policy zone: DMA
Jul 14 21:59:54.906149 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 21:59:54.906158 kernel: software IO TLB: area num 4.
Jul 14 21:59:54.906165 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 14 21:59:54.906173 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Jul 14 21:59:54.906180 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 21:59:54.906187 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 21:59:54.906195 kernel: rcu: RCU event tracing is enabled.
Jul 14 21:59:54.906202 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 21:59:54.906210 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 21:59:54.906217 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 21:59:54.906224 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 21:59:54.906231 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 21:59:54.906238 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 14 21:59:54.906247 kernel: GICv3: 256 SPIs implemented
Jul 14 21:59:54.906254 kernel: GICv3: 0 Extended SPIs implemented
Jul 14 21:59:54.906261 kernel: Root IRQ handler: gic_handle_irq
Jul 14 21:59:54.906268 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 14 21:59:54.906276 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 14 21:59:54.906283 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 14 21:59:54.906290 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 14 21:59:54.906298 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 14 21:59:54.906305 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 14 21:59:54.906312 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 14 21:59:54.906319 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 21:59:54.906328 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:59:54.906335 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 14 21:59:54.906342 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 14 21:59:54.906350 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 14 21:59:54.906357 kernel: arm-pv: using stolen time PV
Jul 14 21:59:54.906364 kernel: Console: colour dummy device 80x25
Jul 14 21:59:54.906372 kernel: ACPI: Core revision 20230628
Jul 14 21:59:54.906379 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 14 21:59:54.906387 kernel: pid_max: default: 32768 minimum: 301
Jul 14 21:59:54.906394 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 21:59:54.906403 kernel: landlock: Up and running.
Jul 14 21:59:54.906410 kernel: SELinux: Initializing.
Jul 14 21:59:54.906417 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:59:54.906425 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:59:54.906433 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:59:54.906441 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:59:54.906448 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 21:59:54.906468 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 21:59:54.906476 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 14 21:59:54.906498 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 14 21:59:54.906506 kernel: Remapping and enabling EFI services.
Jul 14 21:59:54.906514 kernel: smp: Bringing up secondary CPUs ...
Jul 14 21:59:54.906521 kernel: Detected PIPT I-cache on CPU1
Jul 14 21:59:54.906529 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 14 21:59:54.906537 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 14 21:59:54.906544 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:59:54.906552 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 14 21:59:54.906560 kernel: Detected PIPT I-cache on CPU2
Jul 14 21:59:54.906574 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 14 21:59:54.906584 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 14 21:59:54.906592 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:59:54.906605 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 14 21:59:54.906616 kernel: Detected PIPT I-cache on CPU3
Jul 14 21:59:54.906623 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 14 21:59:54.906631 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 14 21:59:54.906639 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:59:54.906646 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 14 21:59:54.906654 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 21:59:54.906663 kernel: SMP: Total of 4 processors activated.
Jul 14 21:59:54.906672 kernel: CPU features: detected: 32-bit EL0 Support
Jul 14 21:59:54.906679 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 14 21:59:54.906687 kernel: CPU features: detected: Common not Private translations
Jul 14 21:59:54.906695 kernel: CPU features: detected: CRC32 instructions
Jul 14 21:59:54.906702 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 14 21:59:54.906710 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 14 21:59:54.906718 kernel: CPU features: detected: LSE atomic instructions
Jul 14 21:59:54.906727 kernel: CPU features: detected: Privileged Access Never
Jul 14 21:59:54.906734 kernel: CPU features: detected: RAS Extension Support
Jul 14 21:59:54.906742 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 14 21:59:54.906750 kernel: CPU: All CPU(s) started at EL1
Jul 14 21:59:54.906758 kernel: alternatives: applying system-wide alternatives
Jul 14 21:59:54.906765 kernel: devtmpfs: initialized
Jul 14 21:59:54.906773 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 21:59:54.906781 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 21:59:54.906788 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 21:59:54.906797 kernel: SMBIOS 3.0.0 present.
Jul 14 21:59:54.906805 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 14 21:59:54.906813 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 21:59:54.906821 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 14 21:59:54.906829 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 14 21:59:54.906836 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 14 21:59:54.906844 kernel: audit: initializing netlink subsys (disabled)
Jul 14 21:59:54.906852 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 14 21:59:54.906859 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 21:59:54.906868 kernel: cpuidle: using governor menu
Jul 14 21:59:54.906876 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 14 21:59:54.906884 kernel: ASID allocator initialised with 32768 entries
Jul 14 21:59:54.906891 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 21:59:54.906899 kernel: Serial: AMBA PL011 UART driver
Jul 14 21:59:54.906907 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 14 21:59:54.906915 kernel: Modules: 0 pages in range for non-PLT usage
Jul 14 21:59:54.906922 kernel: Modules: 509008 pages in range for PLT usage
Jul 14 21:59:54.906930 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 21:59:54.906939 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 21:59:54.906947 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 14 21:59:54.906955 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 14 21:59:54.906963 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 21:59:54.906971 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 21:59:54.906978 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 14 21:59:54.906986 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 14 21:59:54.906994 kernel: ACPI: Added _OSI(Module Device)
Jul 14 21:59:54.907001 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 21:59:54.907010 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 21:59:54.907018 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 21:59:54.907026 kernel: ACPI: Interpreter enabled
Jul 14 21:59:54.907033 kernel: ACPI: Using GIC for interrupt routing
Jul 14 21:59:54.907041 kernel: ACPI: MCFG table detected, 1 entries
Jul 14 21:59:54.907049 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 14 21:59:54.907056 kernel: printk: console [ttyAMA0] enabled
Jul 14 21:59:54.907064 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 21:59:54.907225 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 21:59:54.907309 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 14 21:59:54.907379 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 14 21:59:54.907449 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 14 21:59:54.907624 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 14 21:59:54.907636 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 14 21:59:54.907644 kernel: PCI host bridge to bus 0000:00
Jul 14 21:59:54.907719 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 14 21:59:54.907786 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 14 21:59:54.907848 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 14 21:59:54.907908 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 21:59:54.907995 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 14 21:59:54.908147 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 21:59:54.908246 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 14 21:59:54.908325 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 14 21:59:54.908396 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:59:54.908499 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:59:54.908597 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 14 21:59:54.908683 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 14 21:59:54.908771 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 14 21:59:54.908833 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 14 21:59:54.908924 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 14 21:59:54.908936 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 14 21:59:54.908944 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 14 21:59:54.908952 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 14 21:59:54.908960 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 14 21:59:54.908968 kernel: iommu: Default domain type: Translated
Jul 14 21:59:54.908975 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 14 21:59:54.908983 kernel: efivars: Registered efivars operations
Jul 14 21:59:54.908993 kernel: vgaarb: loaded
Jul 14 21:59:54.909001 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 14 21:59:54.909009 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 21:59:54.909017 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 21:59:54.909024 kernel: pnp: PnP ACPI init
Jul 14 21:59:54.909119 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 14 21:59:54.909131 kernel: pnp: PnP ACPI: found 1 devices
Jul 14 21:59:54.909139 kernel: NET: Registered PF_INET protocol family
Jul 14 21:59:54.909147 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 21:59:54.909158 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 21:59:54.909166 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 21:59:54.909174 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 21:59:54.909182 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 21:59:54.909190 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 21:59:54.909197 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:59:54.909205 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:59:54.909213 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 21:59:54.909223 kernel: PCI: CLS 0 bytes, default 64
Jul 14 21:59:54.909231 kernel: kvm [1]: HYP mode not available
Jul 14 21:59:54.909239 kernel: Initialise system trusted keyrings
Jul 14 21:59:54.909246 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 21:59:54.909254 kernel: Key type asymmetric registered
Jul 14 21:59:54.909262 kernel: Asymmetric key parser 'x509' registered
Jul 14 21:59:54.909269 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 14 21:59:54.909277 kernel: io scheduler mq-deadline registered
Jul 14 21:59:54.909284 kernel: io scheduler kyber registered
Jul 14 21:59:54.909292 kernel: io scheduler bfq registered
Jul 14 21:59:54.909302 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 14 21:59:54.909309 kernel: ACPI: button: Power Button [PWRB]
Jul 14 21:59:54.909317 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 14 21:59:54.909387 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 14 21:59:54.909398 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 21:59:54.909406 kernel: thunder_xcv, ver 1.0
Jul 14 21:59:54.909414 kernel: thunder_bgx, ver 1.0
Jul 14 21:59:54.909422 kernel: nicpf, ver 1.0
Jul 14 21:59:54.909429 kernel: nicvf, ver 1.0
Jul 14 21:59:54.909529 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 14 21:59:54.909609 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T21:59:54 UTC (1752530394)
Jul 14 21:59:54.909620 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 14 21:59:54.909628 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 14 21:59:54.909636 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 14 21:59:54.909644 kernel: watchdog: Hard watchdog permanently disabled
Jul 14 21:59:54.909652 kernel: NET: Registered PF_INET6 protocol family
Jul 14 21:59:54.909660 kernel: Segment Routing with IPv6
Jul 14 21:59:54.909670 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 21:59:54.909678 kernel: NET: Registered PF_PACKET protocol family
Jul 14 21:59:54.909685 kernel: Key type dns_resolver registered
Jul 14 21:59:54.909693 kernel: registered taskstats version 1
Jul 14 21:59:54.909701 kernel: Loading compiled-in X.509 certificates
Jul 14 21:59:54.909709 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: 0878f879bf0f15203fd920e9f7d6346db298c301'
Jul 14 21:59:54.909716 kernel: Key type .fscrypt registered
Jul 14 21:59:54.909724 kernel: Key type fscrypt-provisioning registered
Jul 14 21:59:54.909732 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 21:59:54.909741 kernel: ima: Allocated hash algorithm: sha1
Jul 14 21:59:54.909749 kernel: ima: No architecture policies found
Jul 14 21:59:54.909757 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 14 21:59:54.909764 kernel: clk: Disabling unused clocks
Jul 14 21:59:54.909772 kernel: Freeing unused kernel memory: 39424K
Jul 14 21:59:54.909780 kernel: Run /init as init process
Jul 14 21:59:54.909787 kernel: with arguments:
Jul 14 21:59:54.909795 kernel: /init
Jul 14 21:59:54.909802 kernel: with environment:
Jul 14 21:59:54.909811 kernel: HOME=/
Jul 14 21:59:54.909819 kernel: TERM=linux
Jul 14 21:59:54.909827 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 21:59:54.909836 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 21:59:54.909846 systemd[1]: Detected virtualization kvm.
Jul 14 21:59:54.909854 systemd[1]: Detected architecture arm64.
Jul 14 21:59:54.909862 systemd[1]: Running in initrd.
Jul 14 21:59:54.909872 systemd[1]: No hostname configured, using default hostname.
Jul 14 21:59:54.909880 systemd[1]: Hostname set to .
Jul 14 21:59:54.909889 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 21:59:54.909897 systemd[1]: Queued start job for default target initrd.target.
Jul 14 21:59:54.909905 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 21:59:54.909917 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 21:59:54.909926 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 21:59:54.909934 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 21:59:54.909944 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 21:59:54.909953 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 21:59:54.909963 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 21:59:54.909972 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 21:59:54.909980 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 21:59:54.909989 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:59:54.909997 systemd[1]: Reached target paths.target - Path Units.
Jul 14 21:59:54.910007 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 21:59:54.910015 systemd[1]: Reached target swap.target - Swaps.
Jul 14 21:59:54.910024 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 21:59:54.910032 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 21:59:54.910041 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 21:59:54.910053 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 21:59:54.910062 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 14 21:59:54.910072 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 21:59:54.910081 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 21:59:54.910094 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 21:59:54.910103 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 21:59:54.910112 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 14 21:59:54.910120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 21:59:54.910129 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 14 21:59:54.910137 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 21:59:54.910146 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 21:59:54.910154 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 21:59:54.910164 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:59:54.910173 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 14 21:59:54.910181 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 21:59:54.910189 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 21:59:54.910198 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 21:59:54.910209 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 21:59:54.910217 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 21:59:54.910243 systemd-journald[238]: Collecting audit messages is disabled.
Jul 14 21:59:54.910264 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:59:54.910273 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 21:59:54.910282 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:59:54.910290 systemd-journald[238]: Journal started
Jul 14 21:59:54.910309 systemd-journald[238]: Runtime Journal (/run/log/journal/45bd0cb895414fc4a594f88e0a4fafdb) is 5.9M, max 47.3M, 41.4M free.
Jul 14 21:59:54.895886 systemd-modules-load[240]: Inserted module 'overlay'
Jul 14 21:59:54.911675 kernel: Bridge firewalling registered
Jul 14 21:59:54.912692 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jul 14 21:59:54.913923 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 21:59:54.914932 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 21:59:54.915882 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 21:59:54.920044 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 21:59:54.921339 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 21:59:54.931530 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:59:54.932650 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:59:54.934368 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 21:59:54.937068 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 14 21:59:54.938979 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 21:59:54.950503 dracut-cmdline[277]: dracut-dracut-053
Jul 14 21:59:54.952668 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=219fd31147cccfc1f4834c1854a4109714661cabce52e86d5c93000af393c45b
Jul 14 21:59:54.967140 systemd-resolved[278]: Positive Trust Anchors:
Jul 14 21:59:54.967158 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:59:54.967189 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 21:59:54.971878 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jul 14 21:59:54.972865 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 21:59:54.973947 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:59:55.017481 kernel: SCSI subsystem initialized
Jul 14 21:59:55.022477 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 21:59:55.029489 kernel: iscsi: registered transport (tcp)
Jul 14 21:59:55.043516 kernel: iscsi: registered transport (qla4xxx)
Jul 14 21:59:55.043531 kernel: QLogic iSCSI HBA Driver
Jul 14 21:59:55.085956 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 14 21:59:55.100687 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 21:59:55.115475 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 21:59:55.115516 kernel: device-mapper: uevent: version 1.0.3
Jul 14 21:59:55.115528 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 14 21:59:55.163508 kernel: raid6: neonx8 gen() 15776 MB/s
Jul 14 21:59:55.180481 kernel: raid6: neonx4 gen() 15663 MB/s
Jul 14 21:59:55.197470 kernel: raid6: neonx2 gen() 13229 MB/s
Jul 14 21:59:55.214469 kernel: raid6: neonx1 gen() 10492 MB/s
Jul 14 21:59:55.231468 kernel: raid6: int64x8 gen() 6968 MB/s
Jul 14 21:59:55.248477 kernel: raid6: int64x4 gen() 7337 MB/s
Jul 14 21:59:55.265468 kernel: raid6: int64x2 gen() 6133 MB/s
Jul 14 21:59:55.282468 kernel: raid6: int64x1 gen() 5061 MB/s
Jul 14 21:59:55.282484 kernel: raid6: using algorithm neonx8 gen() 15776 MB/s
Jul 14 21:59:55.299475 kernel: raid6: .... xor() 11917 MB/s, rmw enabled
Jul 14 21:59:55.299489 kernel: raid6: using neon recovery algorithm
Jul 14 21:59:55.304769 kernel: xor: measuring software checksum speed
Jul 14 21:59:55.304787 kernel: 8regs : 19793 MB/sec
Jul 14 21:59:55.305817 kernel: 32regs : 19650 MB/sec
Jul 14 21:59:55.305830 kernel: arm64_neon : 27105 MB/sec
Jul 14 21:59:55.305840 kernel: xor: using function: arm64_neon (27105 MB/sec)
Jul 14 21:59:55.359877 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 14 21:59:55.376436 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 21:59:55.386639 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 21:59:55.398411 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 14 21:59:55.401610 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 21:59:55.414652 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 21:59:55.427076 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Jul 14 21:59:55.455573 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 21:59:55.464642 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 21:59:55.506498 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:59:55.513911 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 21:59:55.524992 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 21:59:55.526369 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 21:59:55.527718 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:59:55.529360 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 21:59:55.544862 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 21:59:55.551943 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 14 21:59:55.552114 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 21:59:55.555678 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 21:59:55.555717 kernel: GPT:9289727 != 19775487
Jul 14 21:59:55.555736 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 21:59:55.555626 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 21:59:55.559541 kernel: GPT:9289727 != 19775487
Jul 14 21:59:55.559568 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 21:59:55.559588 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:59:55.559715 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 21:59:55.559836 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:59:55.562603 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:59:55.563809 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 21:59:55.563976 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:59:55.565684 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:59:55.572821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:59:55.582516 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (510)
Jul 14 21:59:55.584513 kernel: BTRFS: device fsid a239cc51-2249-4f1a-8861-421a0d84a369 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (523)
Jul 14 21:59:55.587368 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:59:55.591871 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 21:59:55.599914 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 21:59:55.604483 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 21:59:55.608191 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 21:59:55.609143 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 21:59:55.622650 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 21:59:55.624878 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:59:55.629978 disk-uuid[551]: Primary Header is updated.
Jul 14 21:59:55.629978 disk-uuid[551]: Secondary Entries is updated.
Jul 14 21:59:55.629978 disk-uuid[551]: Secondary Header is updated.
Jul 14 21:59:55.633516 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:59:55.651706 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:59:56.652553 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:59:56.653246 disk-uuid[552]: The operation has completed successfully.
Jul 14 21:59:56.672335 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 21:59:56.672465 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 14 21:59:56.695634 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 14 21:59:56.698736 sh[573]: Success
Jul 14 21:59:56.717482 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 14 21:59:56.755383 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 14 21:59:56.773120 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 14 21:59:56.775120 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 14 21:59:56.785720 kernel: BTRFS info (device dm-0): first mount of filesystem a239cc51-2249-4f1a-8861-421a0d84a369
Jul 14 21:59:56.785759 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:59:56.785771 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 14 21:59:56.786620 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 14 21:59:56.787750 kernel: BTRFS info (device dm-0): using free space tree
Jul 14 21:59:56.791354 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 14 21:59:56.792679 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 14 21:59:56.797616 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 14 21:59:56.798975 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 14 21:59:56.805977 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:59:56.806018 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:59:56.806030 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:59:56.808470 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:59:56.820274 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 14 21:59:56.821605 kernel: BTRFS info (device vda6): last unmount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:59:56.828693 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 14 21:59:56.836754 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 14 21:59:56.937731 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 21:59:56.955742 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 21:59:57.001012 systemd-networkd[764]: lo: Link UP
Jul 14 21:59:57.001024 systemd-networkd[764]: lo: Gained carrier
Jul 14 21:59:57.001729 systemd-networkd[764]: Enumeration completed
Jul 14 21:59:57.002336 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:59:57.002340 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 21:59:57.003205 systemd-networkd[764]: eth0: Link UP
Jul 14 21:59:57.003208 systemd-networkd[764]: eth0: Gained carrier
Jul 14 21:59:57.003214 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:59:57.004692 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 21:59:57.005653 systemd[1]: Reached target network.target - Network.
Jul 14 21:59:57.036506 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 21:59:57.055727 ignition[662]: Ignition 2.19.0
Jul 14 21:59:57.055738 ignition[662]: Stage: fetch-offline
Jul 14 21:59:57.055775 ignition[662]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:59:57.055784 ignition[662]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:59:57.056064 ignition[662]: parsed url from cmdline: ""
Jul 14 21:59:57.056067 ignition[662]: no config URL provided
Jul 14 21:59:57.056072 ignition[662]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 21:59:57.056079 ignition[662]: no config at "/usr/lib/ignition/user.ign"
Jul 14 21:59:57.056103 ignition[662]: op(1): [started] loading QEMU firmware config module
Jul 14 21:59:57.056107 ignition[662]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 21:59:57.067062 ignition[662]: op(1): [finished] loading QEMU firmware config module
Jul 14 21:59:57.104723 ignition[662]: parsing config with SHA512: 2b7f490cd890b34fc15d0f8ba599e4454b0e32a221f28e9c5da02a70eacc3d02b8cf4ea1947765363f2619b924ebe838560c55425137c594203414089f74f924
Jul 14 21:59:57.108624 unknown[662]: fetched base config from "system"
Jul 14 21:59:57.108634 unknown[662]: fetched user config from "qemu"
Jul 14 21:59:57.109034 ignition[662]: fetch-offline: fetch-offline passed
Jul 14 21:59:57.110954 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 21:59:57.109091 ignition[662]: Ignition finished successfully
Jul 14 21:59:57.112175 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 21:59:57.123668 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 14 21:59:57.134220 ignition[772]: Ignition 2.19.0
Jul 14 21:59:57.134229 ignition[772]: Stage: kargs
Jul 14 21:59:57.134391 ignition[772]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:59:57.134400 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:59:57.135264 ignition[772]: kargs: kargs passed
Jul 14 21:59:57.135308 ignition[772]: Ignition finished successfully
Jul 14 21:59:57.137166 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 14 21:59:57.139205 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 14 21:59:57.152490 ignition[780]: Ignition 2.19.0
Jul 14 21:59:57.152500 ignition[780]: Stage: disks
Jul 14 21:59:57.152676 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:59:57.152686 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:59:57.153522 ignition[780]: disks: disks passed
Jul 14 21:59:57.155095 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 14 21:59:57.153572 ignition[780]: Ignition finished successfully
Jul 14 21:59:57.157692 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 21:59:57.158494 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 21:59:57.159931 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 21:59:57.161268 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 21:59:57.162545 systemd[1]: Reached target basic.target - Basic System.
Jul 14 21:59:57.171654 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 14 21:59:57.182337 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 14 21:59:57.185870 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 14 21:59:57.187673 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 21:59:57.236476 kernel: EXT4-fs (vda9): mounted filesystem a9f35e2f-e295-4589-8fb4-4b611a8bb71c r/w with ordered data mode. Quota mode: none.
Jul 14 21:59:57.237301 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 21:59:57.238407 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 21:59:57.249565 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:59:57.251489 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 14 21:59:57.252288 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 14 21:59:57.252335 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 21:59:57.252360 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 21:59:57.259540 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 14 21:59:57.262925 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (798)
Jul 14 21:59:57.262946 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:59:57.262957 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:59:57.262968 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:59:57.263149 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 14 21:59:57.265572 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:59:57.267249 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:59:57.304505 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 21:59:57.308726 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jul 14 21:59:57.312893 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 21:59:57.316495 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 21:59:57.392103 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 14 21:59:57.406570 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 14 21:59:57.408001 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 14 21:59:57.413468 kernel: BTRFS info (device vda6): last unmount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:59:57.431424 ignition[912]: INFO : Ignition 2.19.0
Jul 14 21:59:57.431424 ignition[912]: INFO : Stage: mount
Jul 14 21:59:57.434089 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:59:57.434089 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:59:57.434089 ignition[912]: INFO : mount: mount passed
Jul 14 21:59:57.434089 ignition[912]: INFO : Ignition finished successfully
Jul 14 21:59:57.431529 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 14 21:59:57.434291 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 14 21:59:57.441544 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 14 21:59:57.785046 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 14 21:59:57.797717 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:59:57.803739 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (927)
Jul 14 21:59:57.803786 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:59:57.803798 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:59:57.804942 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:59:57.807498 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:59:57.808004 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:59:57.825571 ignition[945]: INFO : Ignition 2.19.0
Jul 14 21:59:57.825571 ignition[945]: INFO : Stage: files
Jul 14 21:59:57.826780 ignition[945]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:59:57.826780 ignition[945]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:59:57.826780 ignition[945]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 21:59:57.830037 ignition[945]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 21:59:57.830037 ignition[945]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 21:59:57.833011 ignition[945]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 21:59:57.834001 ignition[945]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 21:59:57.834001 ignition[945]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 21:59:57.833513 unknown[945]: wrote ssh authorized keys file for user: core
Jul 14 21:59:57.836793 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 14 21:59:57.836793 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 14 21:59:58.043698 systemd-networkd[764]: eth0: Gained IPv6LL
Jul 14 22:00:07.939470 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 14 22:00:08.250309 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 14 22:00:08.250309 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 22:00:08.253138 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 22:00:08.253138 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 22:00:08.253138 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 22:00:08.253138 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 22:00:08.253138 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 22:00:08.253138 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 22:00:08.253138 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 22:00:08.253138 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 22:00:08.253138 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 22:00:08.253138 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 22:00:08.253138 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 22:00:08.253138 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 22:00:08.253138 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 14 22:00:38.642523 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 14 22:00:39.068926 ignition[945]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 14 22:00:39.068926 ignition[945]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 14 22:00:39.072155 ignition[945]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 22:00:39.072155 ignition[945]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 22:00:39.072155 ignition[945]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 14 22:00:39.072155 ignition[945]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 14 22:00:39.072155 ignition[945]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 22:00:39.072155 ignition[945]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 22:00:39.072155 ignition[945]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 14 22:00:39.072155 ignition[945]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 22:00:39.091619 ignition[945]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 22:00:39.095655 ignition[945]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 22:00:39.097765 ignition[945]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 22:00:39.097765 ignition[945]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 22:00:39.097765 ignition[945]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 22:00:39.097765 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 22:00:39.097765 ignition[945]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 22:00:39.097765 ignition[945]: INFO : files: files passed
Jul 14 22:00:39.097765 ignition[945]: INFO : Ignition finished successfully
Jul 14 22:00:39.098363 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 14 22:00:39.106622 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 14 22:00:39.108687 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 14 22:00:39.110178 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 22:00:39.110264 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 14 22:00:39.117339 initrd-setup-root-after-ignition[974]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 14 22:00:39.120508 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:00:39.120508 initrd-setup-root-after-ignition[976]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:00:39.122633 initrd-setup-root-after-ignition[980]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:00:39.124553 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 22:00:39.125628 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 14 22:00:39.136624 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 14 22:00:39.155899 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 22:00:39.156009 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 14 22:00:39.157621 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 14 22:00:39.158944 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 14 22:00:39.160225 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 14 22:00:39.161029 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 14 22:00:39.177005 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 22:00:39.184780 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 14 22:00:39.192347 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:00:39.193294 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:00:39.194762 systemd[1]: Stopped target timers.target - Timer Units.
Jul 14 22:00:39.196038 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 22:00:39.196158 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 22:00:39.197955 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 14 22:00:39.199360 systemd[1]: Stopped target basic.target - Basic System.
Jul 14 22:00:39.200598 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 14 22:00:39.201807 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 22:00:39.203199 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 14 22:00:39.204601 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 14 22:00:39.205914 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 22:00:39.207370 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 14 22:00:39.208807 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 14 22:00:39.210057 systemd[1]: Stopped target swap.target - Swaps.
Jul 14 22:00:39.211264 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 14 22:00:39.211387 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 22:00:39.213065 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:00:39.214439 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:00:39.215837 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 14 22:00:39.216540 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:00:39.218002 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 14 22:00:39.218113 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 14 22:00:39.220256 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 14 22:00:39.220364 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 22:00:39.221915 systemd[1]: Stopped target paths.target - Path Units.
Jul 14 22:00:39.223146 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 14 22:00:39.226507 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:00:39.227512 systemd[1]: Stopped target slices.target - Slice Units.
Jul 14 22:00:39.229169 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 14 22:00:39.230371 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 14 22:00:39.230481 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 22:00:39.231659 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 14 22:00:39.231735 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 22:00:39.232922 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 14 22:00:39.233023 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 22:00:39.234406 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 14 22:00:39.234523 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 14 22:00:39.246684 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 14 22:00:39.247379 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 14 22:00:39.247528 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:00:39.252676 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 14 22:00:39.253340 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 14 22:00:39.253480 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:00:39.254910 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 14 22:00:39.259132 ignition[1000]: INFO : Ignition 2.19.0
Jul 14 22:00:39.259132 ignition[1000]: INFO : Stage: umount
Jul 14 22:00:39.259132 ignition[1000]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:00:39.259132 ignition[1000]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:00:39.259132 ignition[1000]: INFO : umount: umount passed
Jul 14 22:00:39.259132 ignition[1000]: INFO : Ignition finished successfully
Jul 14 22:00:39.255008 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 22:00:39.259880 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 14 22:00:39.259966 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 14 22:00:39.261801 systemd[1]: Stopped target network.target - Network.
Jul 14 22:00:39.262554 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 14 22:00:39.262612 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 14 22:00:39.264090 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 14 22:00:39.264126 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 14 22:00:39.265444 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 14 22:00:39.265495 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 14 22:00:39.267050 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 14 22:00:39.267089 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 14 22:00:39.269001 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 14 22:00:39.270301 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 14 22:00:39.272331 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 14 22:00:39.272928 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 14 22:00:39.273011 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 14 22:00:39.274801 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 14 22:00:39.274871 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 14 22:00:39.277369 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 14 22:00:39.277422 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 14 22:00:39.280582 systemd-networkd[764]: eth0: DHCPv6 lease lost
Jul 14 22:00:39.281443 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 14 22:00:39.281566 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 14 22:00:39.283448 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 14 22:00:39.283608 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 14 22:00:39.285774 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 14 22:00:39.285832 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:00:39.295596 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 14 22:00:39.296297 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 14 22:00:39.296346 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 22:00:39.297931 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 22:00:39.297971 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:00:39.299415 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 14 22:00:39.299477 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:00:39.301095 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 14 22:00:39.301134 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:00:39.302758 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:00:39.311213 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 14 22:00:39.311336 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 14 22:00:39.318167 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 14 22:00:39.318319 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:00:39.320085 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 14 22:00:39.320124 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:00:39.321449 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 14 22:00:39.321487 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:00:39.322921 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 14 22:00:39.322963 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 22:00:39.325118 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 14 22:00:39.325181 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 14 22:00:39.327157 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 22:00:39.327197 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:00:39.337647 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 14 22:00:39.338440 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 14 22:00:39.338513 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:00:39.340176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 22:00:39.340213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:00:39.344865 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 14 22:00:39.345745 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 14 22:00:39.346818 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 14 22:00:39.348922 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 14 22:00:39.357814 systemd[1]: Switching root.
Jul 14 22:00:39.390527 systemd-journald[238]: Journal stopped
Jul 14 22:00:40.442784 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jul 14 22:00:40.442850 kernel: SELinux: policy capability network_peer_controls=1
Jul 14 22:00:40.442863 kernel: SELinux: policy capability open_perms=1
Jul 14 22:00:40.442873 kernel: SELinux: policy capability extended_socket_class=1
Jul 14 22:00:40.442883 kernel: SELinux: policy capability always_check_network=0
Jul 14 22:00:40.442893 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 14 22:00:40.442903 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 14 22:00:40.442917 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 14 22:00:40.442927 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 14 22:00:40.442938 kernel: audit: type=1403 audit(1752530439.861:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 14 22:00:40.442953 systemd[1]: Successfully loaded SELinux policy in 32.529ms.
Jul 14 22:00:40.442971 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.705ms.
Jul 14 22:00:40.442983 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 22:00:40.442996 systemd[1]: Detected virtualization kvm.
Jul 14 22:00:40.443008 systemd[1]: Detected architecture arm64.
Jul 14 22:00:40.443018 systemd[1]: Detected first boot.
Jul 14 22:00:40.443029 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 22:00:40.443040 zram_generator::config[1045]: No configuration found.
Jul 14 22:00:40.443053 systemd[1]: Populated /etc with preset unit settings.
Jul 14 22:00:40.443069 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 14 22:00:40.443080 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 14 22:00:40.443091 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 14 22:00:40.443102 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 14 22:00:40.443113 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 14 22:00:40.443124 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 14 22:00:40.443135 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 14 22:00:40.443148 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 14 22:00:40.443160 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 14 22:00:40.443171 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 14 22:00:40.443183 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 14 22:00:40.443193 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:00:40.443205 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:00:40.443217 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 14 22:00:40.443228 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 14 22:00:40.443239 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 14 22:00:40.443251 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 22:00:40.443262 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 14 22:00:40.443273 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:00:40.443284 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 14 22:00:40.443295 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 14 22:00:40.443307 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 14 22:00:40.443318 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 14 22:00:40.443330 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:00:40.443341 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 22:00:40.443351 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 22:00:40.443362 systemd[1]: Reached target swap.target - Swaps.
Jul 14 22:00:40.443373 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 14 22:00:40.443384 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 14 22:00:40.443395 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:00:40.443406 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:00:40.443417 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:00:40.443436 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 14 22:00:40.443468 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 14 22:00:40.443481 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 14 22:00:40.443493 systemd[1]: Mounting media.mount - External Media Directory...
Jul 14 22:00:40.443504 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 14 22:00:40.443515 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 14 22:00:40.443525 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 14 22:00:40.443536 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 14 22:00:40.443547 systemd[1]: Reached target machines.target - Containers.
Jul 14 22:00:40.443560 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 14 22:00:40.443571 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:00:40.443582 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 22:00:40.443593 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 14 22:00:40.443604 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:00:40.443615 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 22:00:40.443626 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:00:40.443637 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 14 22:00:40.443648 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:00:40.443662 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 14 22:00:40.443673 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 14 22:00:40.443685 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 14 22:00:40.443695 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 14 22:00:40.443707 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 14 22:00:40.443719 kernel: fuse: init (API version 7.39)
Jul 14 22:00:40.443729 kernel: loop: module loaded
Jul 14 22:00:40.443739 kernel: ACPI: bus type drm_connector registered
Jul 14 22:00:40.443751 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 22:00:40.443762 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 22:00:40.443773 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 14 22:00:40.443785 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 14 22:00:40.443795 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 22:00:40.443806 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 14 22:00:40.443816 systemd[1]: Stopped verity-setup.service.
Jul 14 22:00:40.443827 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 14 22:00:40.443839 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 14 22:00:40.443874 systemd-journald[1109]: Collecting audit messages is disabled.
Jul 14 22:00:40.443897 systemd[1]: Mounted media.mount - External Media Directory.
Jul 14 22:00:40.443908 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 14 22:00:40.443920 systemd-journald[1109]: Journal started
Jul 14 22:00:40.443944 systemd-journald[1109]: Runtime Journal (/run/log/journal/45bd0cb895414fc4a594f88e0a4fafdb) is 5.9M, max 47.3M, 41.4M free.
Jul 14 22:00:40.274004 systemd[1]: Queued start job for default target multi-user.target.
Jul 14 22:00:40.286318 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 14 22:00:40.286690 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 14 22:00:40.447968 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 22:00:40.447120 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 14 22:00:40.448199 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 14 22:00:40.450046 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:00:40.451314 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 14 22:00:40.451505 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 14 22:00:40.453013 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:00:40.453148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:00:40.454396 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 14 22:00:40.455535 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 22:00:40.455674 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 22:00:40.456746 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:00:40.456880 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:00:40.458120 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 14 22:00:40.458251 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 14 22:00:40.459392 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:00:40.461572 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:00:40.462651 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:00:40.464128 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 14 22:00:40.465437 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 14 22:00:40.478128 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 14 22:00:40.484555 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 14 22:00:40.486438 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 14 22:00:40.487296 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 14 22:00:40.487329 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 22:00:40.489083 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 14 22:00:40.491048 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 14 22:00:40.493629 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 14 22:00:40.494482 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:00:40.495803 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 14 22:00:40.498219 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 14 22:00:40.499259 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 22:00:40.502631 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 14 22:00:40.505095 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 22:00:40.505271 systemd-journald[1109]: Time spent on flushing to /var/log/journal/45bd0cb895414fc4a594f88e0a4fafdb is 27.571ms for 850 entries.
Jul 14 22:00:40.505271 systemd-journald[1109]: System Journal (/var/log/journal/45bd0cb895414fc4a594f88e0a4fafdb) is 8.0M, max 195.6M, 187.6M free.
Jul 14 22:00:40.537328 systemd-journald[1109]: Received client request to flush runtime journal.
Jul 14 22:00:40.537404 kernel: loop0: detected capacity change from 0 to 114432
Jul 14 22:00:40.506816 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 22:00:40.511694 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 14 22:00:40.516734 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 14 22:00:40.519148 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:00:40.520522 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 14 22:00:40.521898 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 14 22:00:40.523099 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 14 22:00:40.524355 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 14 22:00:40.531404 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:00:40.532858 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 14 22:00:40.545025 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 14 22:00:40.547669 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 14 22:00:40.550656 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 14 22:00:40.554053 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 14 22:00:40.567649 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 14 22:00:40.571881 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 14 22:00:40.572570 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 14 22:00:40.580754 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 22:00:40.582469 kernel: loop1: detected capacity change from 0 to 203944
Jul 14 22:00:40.582472 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 14 22:00:40.602597 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jul 14 22:00:40.602947 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Jul 14 22:00:40.606503 kernel: loop2: detected capacity change from 0 to 114328
Jul 14 22:00:40.608747 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:00:40.636496 kernel: loop3: detected capacity change from 0 to 114432
Jul 14 22:00:40.641538 kernel: loop4: detected capacity change from 0 to 203944
Jul 14 22:00:40.647494 kernel: loop5: detected capacity change from 0 to 114328
Jul 14 22:00:40.650420 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 14 22:00:40.650861 (sd-merge)[1181]: Merged extensions into '/usr'.
Jul 14 22:00:40.655438 systemd[1]: Reloading requested from client PID 1156 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 14 22:00:40.655589 systemd[1]: Reloading...
Jul 14 22:00:40.716494 zram_generator::config[1207]: No configuration found.
Jul 14 22:00:40.775825 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 14 22:00:40.816137 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 22:00:40.852255 systemd[1]: Reloading finished in 196 ms.
Jul 14 22:00:40.890552 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 14 22:00:40.891654 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 14 22:00:40.906879 systemd[1]: Starting ensure-sysext.service...
Jul 14 22:00:40.910926 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 22:00:40.927575 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 14 22:00:40.927856 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 14 22:00:40.928542 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 14 22:00:40.928785 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Jul 14 22:00:40.928843 systemd-tmpfiles[1242]: ACLs are not supported, ignoring.
Jul 14 22:00:40.931062 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)...
Jul 14 22:00:40.931076 systemd[1]: Reloading...
Jul 14 22:00:40.933008 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 22:00:40.933013 systemd-tmpfiles[1242]: Skipping /boot
Jul 14 22:00:40.939994 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 22:00:40.940008 systemd-tmpfiles[1242]: Skipping /boot
Jul 14 22:00:40.969558 zram_generator::config[1269]: No configuration found.
Jul 14 22:00:41.055302 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 22:00:41.091121 systemd[1]: Reloading finished in 159 ms.
Jul 14 22:00:41.112815 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 14 22:00:41.126973 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:00:41.135068 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 14 22:00:41.137487 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 14 22:00:41.139540 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 14 22:00:41.144710 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 22:00:41.151399 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:00:41.154781 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 14 22:00:41.158220 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:00:41.159697 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:00:41.164753 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:00:41.167830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:00:41.168648 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:00:41.170625 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 14 22:00:41.175399 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:00:41.175566 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:00:41.176966 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 14 22:00:41.178499 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:00:41.178649 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:00:41.179976 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:00:41.180167 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:00:41.188126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:00:41.189046 systemd-udevd[1316]: Using default interface naming scheme 'v255'.
Jul 14 22:00:41.202749 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:00:41.207680 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:00:41.213691 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:00:41.215108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:00:41.220822 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 14 22:00:41.223063 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:00:41.226250 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 14 22:00:41.227673 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 14 22:00:41.229300 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:00:41.229492 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:00:41.231118 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:00:41.231254 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:00:41.233950 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 14 22:00:41.235148 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:00:41.235332 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:00:41.242251 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 14 22:00:41.251938 augenrules[1349]: No rules
Jul 14 22:00:41.255801 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 14 22:00:41.263828 systemd[1]: Finished ensure-sysext.service.
Jul 14 22:00:41.272204 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 14 22:00:41.274476 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1350)
Jul 14 22:00:41.274789 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:00:41.289788 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:00:41.293110 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 22:00:41.295997 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:00:41.299573 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:00:41.300351 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:00:41.306639 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 22:00:41.311829 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 14 22:00:41.312747 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 22:00:41.313379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:00:41.313600 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:00:41.314703 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 22:00:41.314861 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 22:00:41.316815 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:00:41.316953 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:00:41.318748 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:00:41.318897 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:00:41.327504 systemd-resolved[1309]: Positive Trust Anchors:
Jul 14 22:00:41.327760 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 22:00:41.327841 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 22:00:41.333215 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 22:00:41.335384 systemd-resolved[1309]: Defaulting to hostname 'linux'.
Jul 14 22:00:41.340009 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 14 22:00:41.340891 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 22:00:41.340947 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 22:00:41.342100 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 22:00:41.342987 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:00:41.362497 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 14 22:00:41.387960 systemd-networkd[1380]: lo: Link UP
Jul 14 22:00:41.387973 systemd-networkd[1380]: lo: Gained carrier
Jul 14 22:00:41.388910 systemd-networkd[1380]: Enumeration completed
Jul 14 22:00:41.389040 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 22:00:41.390106 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 14 22:00:41.392371 systemd[1]: Reached target network.target - Network.
Jul 14 22:00:41.393178 systemd[1]: Reached target time-set.target - System Time Set.
Jul 14 22:00:41.394617 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:00:41.394629 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 22:00:41.395749 systemd-networkd[1380]: eth0: Link UP
Jul 14 22:00:41.395758 systemd-networkd[1380]: eth0: Gained carrier
Jul 14 22:00:41.395772 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:00:41.400624 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 14 22:00:41.407229 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:00:41.409519 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 22:00:41.412386 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Jul 14 22:00:41.414878 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 14 22:00:41.414937 systemd-timesyncd[1382]: Initial clock synchronization to Mon 2025-07-14 22:00:41.320633 UTC.
Jul 14 22:00:41.416807 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 14 22:00:41.425699 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 14 22:00:41.436697 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 22:00:41.452220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:00:41.473026 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 14 22:00:41.474153 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:00:41.475621 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 22:00:41.476449 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 14 22:00:41.477290 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 14 22:00:41.478399 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 14 22:00:41.479306 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 14 22:00:41.480231 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 14 22:00:41.481105 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 14 22:00:41.481138 systemd[1]: Reached target paths.target - Path Units.
Jul 14 22:00:41.481771 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 22:00:41.483256 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 14 22:00:41.485320 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 14 22:00:41.495375 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 14 22:00:41.497382 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 14 22:00:41.498727 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 14 22:00:41.499586 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 22:00:41.500259 systemd[1]: Reached target basic.target - Basic System.
Jul 14 22:00:41.500996 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 14 22:00:41.501025 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 14 22:00:41.501907 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 14 22:00:41.503661 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 14 22:00:41.506589 lvm[1410]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 22:00:41.507595 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 14 22:00:41.510371 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 14 22:00:41.514048 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 14 22:00:41.517676 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 14 22:00:41.521136 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 14 22:00:41.526756 jq[1413]: false
Jul 14 22:00:41.525566 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 14 22:00:41.527400 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 14 22:00:41.531476 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 14 22:00:41.539990 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 14 22:00:41.540538 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 14 22:00:41.541221 systemd[1]: Starting update-engine.service - Update Engine...
Jul 14 22:00:41.543362 extend-filesystems[1414]: Found loop3
Jul 14 22:00:41.543362 extend-filesystems[1414]: Found loop4
Jul 14 22:00:41.543362 extend-filesystems[1414]: Found loop5
Jul 14 22:00:41.543362 extend-filesystems[1414]: Found vda
Jul 14 22:00:41.543362 extend-filesystems[1414]: Found vda1
Jul 14 22:00:41.543362 extend-filesystems[1414]: Found vda2
Jul 14 22:00:41.543362 extend-filesystems[1414]: Found vda3
Jul 14 22:00:41.543362 extend-filesystems[1414]: Found usr
Jul 14 22:00:41.543362 extend-filesystems[1414]: Found vda4
Jul 14 22:00:41.543362 extend-filesystems[1414]: Found vda6
Jul 14 22:00:41.543362 extend-filesystems[1414]: Found vda7
Jul 14 22:00:41.543362 extend-filesystems[1414]: Found vda9
Jul 14 22:00:41.543362 extend-filesystems[1414]: Checking size of /dev/vda9
Jul 14 22:00:41.545953 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 14 22:00:41.553232 dbus-daemon[1412]: [system] SELinux support is enabled
Jul 14 22:00:41.576769 extend-filesystems[1414]: Resized partition /dev/vda9
Jul 14 22:00:41.549565 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 14 22:00:41.577557 jq[1425]: true
Jul 14 22:00:41.553633 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 14 22:00:41.559194 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 14 22:00:41.559392 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 14 22:00:41.560249 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 14 22:00:41.562013 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 14 22:00:41.591502 extend-filesystems[1436]: resize2fs 1.47.1 (20-May-2024)
Jul 14 22:00:41.592586 jq[1437]: true
Jul 14 22:00:41.581984 systemd[1]: motdgen.service: Deactivated successfully.
Jul 14 22:00:41.584510 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 14 22:00:41.594475 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 14 22:00:41.594538 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1370)
Jul 14 22:00:41.608828 tar[1435]: linux-arm64/helm
Jul 14 22:00:41.606490 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 14 22:00:41.606527 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 14 22:00:41.607695 (ntainerd)[1440]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 14 22:00:41.609203 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 14 22:00:41.609223 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 14 22:00:41.623166 update_engine[1424]: I20250714 22:00:41.622853 1424 main.cc:92] Flatcar Update Engine starting
Jul 14 22:00:41.628514 systemd-logind[1420]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 14 22:00:41.631719 systemd-logind[1420]: New seat seat0.
Jul 14 22:00:41.632977 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 14 22:00:41.636356 systemd[1]: Started update-engine.service - Update Engine.
Jul 14 22:00:41.637564 update_engine[1424]: I20250714 22:00:41.637012 1424 update_check_scheduler.cc:74] Next update check in 3m36s
Jul 14 22:00:41.642120 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 14 22:00:41.642535 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 14 22:00:41.656153 extend-filesystems[1436]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 14 22:00:41.656153 extend-filesystems[1436]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 14 22:00:41.656153 extend-filesystems[1436]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 14 22:00:41.655740 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 14 22:00:41.660957 extend-filesystems[1414]: Resized filesystem in /dev/vda9
Jul 14 22:00:41.655915 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 14 22:00:41.663851 bash[1466]: Updated "/home/core/.ssh/authorized_keys"
Jul 14 22:00:41.669622 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 14 22:00:41.671586 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 14 22:00:41.691616 locksmithd[1467]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 14 22:00:41.814493 containerd[1440]: time="2025-07-14T22:00:41.814342400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 14 22:00:41.840363 containerd[1440]: time="2025-07-14T22:00:41.840001120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 14 22:00:41.841496 containerd[1440]: time="2025-07-14T22:00:41.841438960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 14 22:00:41.841732 containerd[1440]: time="2025-07-14T22:00:41.841623120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 14 22:00:41.841732 containerd[1440]: time="2025-07-14T22:00:41.841647960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 14 22:00:41.842147 containerd[1440]: time="2025-07-14T22:00:41.841986560Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 14 22:00:41.842147 containerd[1440]: time="2025-07-14T22:00:41.842013200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 14 22:00:41.842147 containerd[1440]: time="2025-07-14T22:00:41.842075400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 22:00:41.842147 containerd[1440]: time="2025-07-14T22:00:41.842087920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 14 22:00:41.842829 containerd[1440]: time="2025-07-14T22:00:41.842447600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 22:00:41.842829 containerd[1440]: time="2025-07-14T22:00:41.842484760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 14 22:00:41.842829 containerd[1440]: time="2025-07-14T22:00:41.842498800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 22:00:41.842829 containerd[1440]: time="2025-07-14T22:00:41.842508760Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 14 22:00:41.842829 containerd[1440]: time="2025-07-14T22:00:41.842603520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 14 22:00:41.842829 containerd[1440]: time="2025-07-14T22:00:41.842793960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 14 22:00:41.843207 containerd[1440]: time="2025-07-14T22:00:41.843182760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 14 22:00:41.843333 containerd[1440]: time="2025-07-14T22:00:41.843317160Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 14 22:00:41.843553 containerd[1440]: time="2025-07-14T22:00:41.843531640Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 14 22:00:41.843839 containerd[1440]: time="2025-07-14T22:00:41.843675600Z" level=info msg="metadata content store policy set" policy=shared
Jul 14 22:00:41.848672 containerd[1440]: time="2025-07-14T22:00:41.848645760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 14 22:00:41.848782 containerd[1440]: time="2025-07-14T22:00:41.848769200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 14 22:00:41.848945 containerd[1440]: time="2025-07-14T22:00:41.848928120Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 14 22:00:41.849034 containerd[1440]: time="2025-07-14T22:00:41.849016480Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849100280Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849245960Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849572720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849676000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849697480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849710400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849723440Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849738800Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849751800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849765320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849779280Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849791640Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849805600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 14 22:00:41.851001 containerd[1440]: time="2025-07-14T22:00:41.849817480Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.849838720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.849851600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.849863560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.849881760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.849894800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.849907880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.849920480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.849933840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.849946760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.849960480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.849972480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.849983160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.849996120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.850011240Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 14 22:00:41.851292 containerd[1440]: time="2025-07-14T22:00:41.850033640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851569 containerd[1440]: time="2025-07-14T22:00:41.850045760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851569 containerd[1440]: time="2025-07-14T22:00:41.850057440Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 14 22:00:41.851569 containerd[1440]: time="2025-07-14T22:00:41.850158400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 14 22:00:41.851569 containerd[1440]: time="2025-07-14T22:00:41.850173400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 14 22:00:41.851569 containerd[1440]: time="2025-07-14T22:00:41.850184080Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 14 22:00:41.851569 containerd[1440]: time="2025-07-14T22:00:41.850195600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 14 22:00:41.851569 containerd[1440]: time="2025-07-14T22:00:41.850204480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851569 containerd[1440]: time="2025-07-14T22:00:41.850215840Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 14 22:00:41.851569 containerd[1440]: time="2025-07-14T22:00:41.850227520Z" level=info msg="NRI interface is disabled by configuration."
Jul 14 22:00:41.851569 containerd[1440]: time="2025-07-14T22:00:41.850239320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 14 22:00:41.851734 containerd[1440]: time="2025-07-14T22:00:41.850603960Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 14 22:00:41.851734 containerd[1440]: time="2025-07-14T22:00:41.850663000Z" level=info msg="Connect containerd service"
Jul 14 22:00:41.851734 containerd[1440]: time="2025-07-14T22:00:41.850756520Z" level=info msg="using legacy CRI server"
Jul 14 22:00:41.851734 containerd[1440]: time="2025-07-14T22:00:41.850762920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 14 22:00:41.851734 containerd[1440]: time="2025-07-14T22:00:41.850838200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 14 22:00:41.854183 containerd[1440]: time="2025-07-14T22:00:41.854153560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 14 22:00:41.854724
containerd[1440]: time="2025-07-14T22:00:41.854701440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:00:41.854977 containerd[1440]: time="2025-07-14T22:00:41.854933400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 22:00:41.855083 containerd[1440]: time="2025-07-14T22:00:41.854871320Z" level=info msg="Start subscribing containerd event" Jul 14 22:00:41.857531 containerd[1440]: time="2025-07-14T22:00:41.857506000Z" level=info msg="Start recovering state" Jul 14 22:00:41.857689 containerd[1440]: time="2025-07-14T22:00:41.857673760Z" level=info msg="Start event monitor" Jul 14 22:00:41.857743 containerd[1440]: time="2025-07-14T22:00:41.857730680Z" level=info msg="Start snapshots syncer" Jul 14 22:00:41.857790 containerd[1440]: time="2025-07-14T22:00:41.857779480Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:00:41.857838 containerd[1440]: time="2025-07-14T22:00:41.857826800Z" level=info msg="Start streaming server" Jul 14 22:00:41.858071 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 22:00:41.859564 containerd[1440]: time="2025-07-14T22:00:41.859543200Z" level=info msg="containerd successfully booted in 0.046108s" Jul 14 22:00:41.896668 sshd_keygen[1439]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 22:00:41.915672 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 22:00:41.941897 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 22:00:41.947406 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 22:00:41.947642 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 22:00:41.952230 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 22:00:41.966929 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 22:00:41.969901 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 22:00:41.972352 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 14 22:00:41.973438 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 22:00:41.977945 tar[1435]: linux-arm64/LICENSE Jul 14 22:00:41.978008 tar[1435]: linux-arm64/README.md Jul 14 22:00:41.993842 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 22:00:43.163675 systemd-networkd[1380]: eth0: Gained IPv6LL Jul 14 22:00:43.166116 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 22:00:43.168630 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 22:00:43.180714 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 22:00:43.182825 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:00:43.184818 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 22:00:43.199199 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 22:00:43.200568 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 22:00:43.202507 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 22:00:43.209649 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 22:00:43.818990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:00:43.820181 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jul 14 22:00:43.821493 systemd[1]: Startup finished in 566ms (kernel) + 45.150s (initrd) + 3.996s (userspace) = 49.713s. Jul 14 22:00:43.822477 (kubelet)[1526]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:00:44.231294 kubelet[1526]: E0714 22:00:44.231186 1526 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:00:44.233547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:00:44.233697 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:00:47.125373 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 22:00:47.126542 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:54868.service - OpenSSH per-connection server daemon (10.0.0.1:54868). Jul 14 22:00:47.204312 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 54868 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:00:47.206056 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:00:47.222306 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 22:00:47.233714 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 22:00:47.235171 systemd-logind[1420]: New session 1 of user core. Jul 14 22:00:47.243493 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 22:00:47.245718 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 22:00:47.251634 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:00:47.321679 systemd[1543]: Queued start job for default target default.target. Jul 14 22:00:47.330437 systemd[1543]: Created slice app.slice - User Application Slice. Jul 14 22:00:47.330490 systemd[1543]: Reached target paths.target - Paths. Jul 14 22:00:47.330504 systemd[1543]: Reached target timers.target - Timers. Jul 14 22:00:47.331737 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 22:00:47.341226 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 22:00:47.341286 systemd[1543]: Reached target sockets.target - Sockets. Jul 14 22:00:47.341298 systemd[1543]: Reached target basic.target - Basic System. Jul 14 22:00:47.341336 systemd[1543]: Reached target default.target - Main User Target. Jul 14 22:00:47.341362 systemd[1543]: Startup finished in 85ms. Jul 14 22:00:47.341638 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 22:00:47.351613 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 22:00:47.411323 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:54876.service - OpenSSH per-connection server daemon (10.0.0.1:54876). Jul 14 22:00:47.443682 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 54876 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:00:47.444982 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:00:47.449678 systemd-logind[1420]: New session 2 of user core. Jul 14 22:00:47.458614 systemd[1]: Started session-2.scope - Session 2 of User core. 
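The kubelet failure above is the usual pre-bootstrap state: /var/lib/kubelet/config.yaml is only written later (by kubeadm init/join on a kubeadm-style node), so every start before that exits with status 1 and systemd reschedules the unit. A minimal Python sketch of the precondition the kubelet is enforcing; only the path comes from the log, the rest is illustrative:

    # Reproduce the startup precondition from run.go:72 in the log above:
    # no /var/lib/kubelet/config.yaml means exit non-zero and let systemd retry.
    import os, sys

    CONFIG = "/var/lib/kubelet/config.yaml"

    def main() -> int:
        if not os.path.exists(CONFIG):
            print(f"failed to load kubelet config file, path: {CONFIG}: "
                  "no such file or directory", file=sys.stderr)
            return 1          # systemd records status=1/FAILURE, as logged
        return 0

    if __name__ == "__main__":
        sys.exit(main())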
Jul 14 22:00:47.510285 sshd[1554]: pam_unix(sshd:session): session closed for user core Jul 14 22:00:47.521889 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:54876.service: Deactivated successfully. Jul 14 22:00:47.523291 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 22:00:47.524566 systemd-logind[1420]: Session 2 logged out. Waiting for processes to exit. Jul 14 22:00:47.525685 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:54880.service - OpenSSH per-connection server daemon (10.0.0.1:54880). Jul 14 22:00:47.526532 systemd-logind[1420]: Removed session 2. Jul 14 22:00:47.557920 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 54880 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:00:47.559250 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:00:47.563419 systemd-logind[1420]: New session 3 of user core. Jul 14 22:00:47.575624 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 22:00:47.623064 sshd[1561]: pam_unix(sshd:session): session closed for user core Jul 14 22:00:47.638082 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:54880.service: Deactivated successfully. Jul 14 22:00:47.640751 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 22:00:47.642602 systemd-logind[1420]: Session 3 logged out. Waiting for processes to exit. Jul 14 22:00:47.643170 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:54882.service - OpenSSH per-connection server daemon (10.0.0.1:54882). Jul 14 22:00:47.644137 systemd-logind[1420]: Removed session 3. Jul 14 22:00:47.674718 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 54882 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:00:47.675902 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:00:47.679469 systemd-logind[1420]: New session 4 of user core. Jul 14 22:00:47.686584 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 22:00:47.737539 sshd[1568]: pam_unix(sshd:session): session closed for user core Jul 14 22:00:47.753302 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:54882.service: Deactivated successfully. Jul 14 22:00:47.755029 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 22:00:47.756428 systemd-logind[1420]: Session 4 logged out. Waiting for processes to exit. Jul 14 22:00:47.769708 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:54884.service - OpenSSH per-connection server daemon (10.0.0.1:54884). Jul 14 22:00:47.770623 systemd-logind[1420]: Removed session 4. Jul 14 22:00:47.798587 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 54884 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:00:47.799916 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:00:47.804055 systemd-logind[1420]: New session 5 of user core. Jul 14 22:00:47.822637 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 22:00:47.880926 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 22:00:47.881215 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:00:47.902418 sudo[1578]: pam_unix(sudo:session): session closed for user root Jul 14 22:00:47.904105 sshd[1575]: pam_unix(sshd:session): session closed for user core Jul 14 22:00:47.924069 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:54884.service: Deactivated successfully. 
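Sessions 2 through 4 above each live well under a second: a publickey login, one command, and a clean close, with systemd-logind tearing down each session scope. When auditing this kind of churn it can help to pair the pam_unix open/close events by sshd PID; a small Python sketch, with the regex and timestamp format assumed from this journal's layout:

    # Pair pam_unix(sshd:session) opened/closed events by sshd PID and report
    # session lifetimes in seconds. Journal lines carry no year, so one is passed in.
    import re
    from datetime import datetime

    LINE = re.compile(
        r"(?P<ts>\w{3} \d+ \d{2}:\d{2}:\d{2})\.\d+ sshd\[(?P<pid>\d+)\]: "
        r"pam_unix\(sshd:session\): session (?P<event>opened|closed)")

    def session_lifetimes(lines, year=2025):
        opened = {}
        for line in lines:
            m = LINE.search(line)
            if not m:
                continue
            ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S")
            if m["event"] == "opened":
                opened[m["pid"]] = ts
            elif m["pid"] in opened:
                yield m["pid"], (ts - opened.pop(m["pid"])).total_seconds()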
Jul 14 22:00:47.925615 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 22:00:47.927692 systemd-logind[1420]: Session 5 logged out. Waiting for processes to exit. Jul 14 22:00:47.928928 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:54896.service - OpenSSH per-connection server daemon (10.0.0.1:54896). Jul 14 22:00:47.929734 systemd-logind[1420]: Removed session 5. Jul 14 22:00:47.960834 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 54896 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:00:47.962044 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:00:47.966022 systemd-logind[1420]: New session 6 of user core. Jul 14 22:00:47.974597 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 22:00:48.025699 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 22:00:48.026003 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:00:48.028947 sudo[1587]: pam_unix(sudo:session): session closed for user root Jul 14 22:00:48.033483 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 14 22:00:48.033764 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:00:48.051718 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 14 22:00:48.052943 auditctl[1590]: No rules Jul 14 22:00:48.053879 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 22:00:48.054119 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 14 22:00:48.055954 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 22:00:48.078963 augenrules[1608]: No rules Jul 14 22:00:48.080230 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 22:00:48.081193 sudo[1586]: pam_unix(sudo:session): session closed for user root Jul 14 22:00:48.082618 sshd[1583]: pam_unix(sshd:session): session closed for user core Jul 14 22:00:48.092810 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:54896.service: Deactivated successfully. Jul 14 22:00:48.094238 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 22:00:48.095493 systemd-logind[1420]: Session 6 logged out. Waiting for processes to exit. Jul 14 22:00:48.096577 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:54906.service - OpenSSH per-connection server daemon (10.0.0.1:54906). Jul 14 22:00:48.097255 systemd-logind[1420]: Removed session 6. Jul 14 22:00:48.128179 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 54906 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:00:48.129377 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:00:48.133425 systemd-logind[1420]: New session 7 of user core. Jul 14 22:00:48.140610 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 22:00:48.191510 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 22:00:48.192190 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:00:48.513725 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jul 14 22:00:48.513784 (dockerd)[1637]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 22:00:48.778879 dockerd[1637]: time="2025-07-14T22:00:48.778733905Z" level=info msg="Starting up" Jul 14 22:00:48.932750 dockerd[1637]: time="2025-07-14T22:00:48.932701518Z" level=info msg="Loading containers: start." Jul 14 22:00:49.010510 kernel: Initializing XFRM netlink socket Jul 14 22:00:49.070361 systemd-networkd[1380]: docker0: Link UP Jul 14 22:00:49.087784 dockerd[1637]: time="2025-07-14T22:00:49.087672717Z" level=info msg="Loading containers: done." Jul 14 22:00:49.098098 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck161566522-merged.mount: Deactivated successfully. Jul 14 22:00:49.098905 dockerd[1637]: time="2025-07-14T22:00:49.098848229Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 22:00:49.098966 dockerd[1637]: time="2025-07-14T22:00:49.098952624Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 14 22:00:49.099094 dockerd[1637]: time="2025-07-14T22:00:49.099047066Z" level=info msg="Daemon has completed initialization" Jul 14 22:00:49.129824 dockerd[1637]: time="2025-07-14T22:00:49.129030571Z" level=info msg="API listen on /run/docker.sock" Jul 14 22:00:49.130126 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 22:00:54.236603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 22:00:54.247651 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:00:54.350204 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:00:54.353772 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:00:54.388066 kubelet[1792]: E0714 22:00:54.388009 1792 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:00:54.391239 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:00:54.391410 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:00:59.431258 containerd[1440]: time="2025-07-14T22:00:59.431217100Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Jul 14 22:01:04.486760 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 22:01:04.498658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:01:04.615996 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
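From here the kubelet settles into a crash loop: fail on the missing config file, then get rescheduled by systemd roughly every 10.25 seconds (restart counters 1 through 5 below). The spacing is consistent with a RestartSec= of 10 s plus dispatch latency; the unit setting itself is an assumption, only the timestamps come from the log:

    # Intervals between the "Scheduled restart job" timestamps in this log.
    from datetime import datetime

    restarts = ["22:00:54.236603", "22:01:04.486760", "22:01:14.736599",
                "22:01:24.986648", "22:01:35.236752"]
    ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
    print([round((b - a).total_seconds(), 3) for a, b in zip(ts, ts[1:])])
    # [10.25, 10.25, 10.25, 10.25]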
Jul 14 22:01:04.622539 (kubelet)[1810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:01:04.663298 kubelet[1810]: E0714 22:01:04.663198 1810 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:01:04.665545 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:01:04.665688 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:01:10.680987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1995813480.mount: Deactivated successfully. Jul 14 22:01:11.844570 containerd[1440]: time="2025-07-14T22:01:11.844510160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:11.844909 containerd[1440]: time="2025-07-14T22:01:11.844874151Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" Jul 14 22:01:11.845974 containerd[1440]: time="2025-07-14T22:01:11.845938531Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:11.849390 containerd[1440]: time="2025-07-14T22:01:11.849352654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:11.850939 containerd[1440]: time="2025-07-14T22:01:11.850902835Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 12.419641219s" Jul 14 22:01:11.850978 containerd[1440]: time="2025-07-14T22:01:11.850941305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" Jul 14 22:01:11.855635 containerd[1440]: time="2025-07-14T22:01:11.855592086Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Jul 14 22:01:13.282995 containerd[1440]: time="2025-07-14T22:01:13.282943984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:13.283475 containerd[1440]: time="2025-07-14T22:01:13.283431773Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" Jul 14 22:01:13.284215 containerd[1440]: time="2025-07-14T22:01:13.284177353Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:13.287270 containerd[1440]: time="2025-07-14T22:01:13.287233740Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:13.288413 containerd[1440]: time="2025-07-14T22:01:13.288374166Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.432748848s" Jul 14 22:01:13.288466 containerd[1440]: time="2025-07-14T22:01:13.288410359Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" Jul 14 22:01:13.289003 containerd[1440]: time="2025-07-14T22:01:13.288964295Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Jul 14 22:01:14.667059 containerd[1440]: time="2025-07-14T22:01:14.666938507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:14.667555 containerd[1440]: time="2025-07-14T22:01:14.667292996Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" Jul 14 22:01:14.668167 containerd[1440]: time="2025-07-14T22:01:14.668141442Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:14.670866 containerd[1440]: time="2025-07-14T22:01:14.670809049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:14.672077 containerd[1440]: time="2025-07-14T22:01:14.672042861Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.383045491s" Jul 14 22:01:14.672150 containerd[1440]: time="2025-07-14T22:01:14.672079978Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" Jul 14 22:01:14.672758 containerd[1440]: time="2025-07-14T22:01:14.672623010Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Jul 14 22:01:14.736599 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 14 22:01:14.749718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:01:14.846098 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 22:01:14.849653 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:01:14.886850 kubelet[1888]: E0714 22:01:14.886797 1888 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:01:14.889217 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:01:14.889363 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:01:15.667825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1942746885.mount: Deactivated successfully. Jul 14 22:01:15.968818 containerd[1440]: time="2025-07-14T22:01:15.968684701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:15.969263 containerd[1440]: time="2025-07-14T22:01:15.969211377Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" Jul 14 22:01:15.969932 containerd[1440]: time="2025-07-14T22:01:15.969899841Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:15.972395 containerd[1440]: time="2025-07-14T22:01:15.972362397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:15.972983 containerd[1440]: time="2025-07-14T22:01:15.972951709Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.300297462s" Jul 14 22:01:15.973017 containerd[1440]: time="2025-07-14T22:01:15.972982986Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" Jul 14 22:01:15.973496 containerd[1440]: time="2025-07-14T22:01:15.973466866Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 22:01:16.523306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873473478.mount: Deactivated successfully. 
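The pull records above carry enough to estimate registry throughput: each "stop pulling image" line reports bytes read, and each "Pulled image" line reports the elapsed time. The first pull (kube-apiserver, ~12.4 s) is the outlier since it fronts the initial connection setup; the rest land in the 12-21 MB/s range. The numbers below are copied from the log:

    # Registry throughput from the "bytes read" counts and pull durations above
    # (MB = 1e6 bytes).
    pulls = {
        "kube-apiserver:v1.31.8":          (25_554_610, 12.419641219),
        "kube-controller-manager:v1.31.8": (22_458_980, 1.432748848),
        "kube-scheduler:v1.31.8":          (17_125_815, 1.383045491),
        "kube-proxy:v1.31.8":              (26_871_919, 1.300297462),
    }
    for image, (nbytes, secs) in pulls.items():
        print(f"{image}: {nbytes / secs / 1e6:.1f} MB/s")
    # kube-apiserver: 2.1 MB/s, controller-manager: 15.7,
    # scheduler: 12.4, proxy: 20.7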
Jul 14 22:01:17.291954 containerd[1440]: time="2025-07-14T22:01:17.291893415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:17.292620 containerd[1440]: time="2025-07-14T22:01:17.292584124Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 14 22:01:17.293309 containerd[1440]: time="2025-07-14T22:01:17.293244876Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:17.296428 containerd[1440]: time="2025-07-14T22:01:17.296383964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:17.297810 containerd[1440]: time="2025-07-14T22:01:17.297721026Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.324219363s" Jul 14 22:01:17.297810 containerd[1440]: time="2025-07-14T22:01:17.297762543Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 14 22:01:17.298524 containerd[1440]: time="2025-07-14T22:01:17.298291264Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 22:01:17.714414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3886909464.mount: Deactivated successfully. 
Jul 14 22:01:17.718474 containerd[1440]: time="2025-07-14T22:01:17.718388106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:17.719059 containerd[1440]: time="2025-07-14T22:01:17.719019539Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 14 22:01:17.719780 containerd[1440]: time="2025-07-14T22:01:17.719747445Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:17.722232 containerd[1440]: time="2025-07-14T22:01:17.722195505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:17.723040 containerd[1440]: time="2025-07-14T22:01:17.723008405Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 424.673825ms" Jul 14 22:01:17.723081 containerd[1440]: time="2025-07-14T22:01:17.723043162Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 14 22:01:17.723658 containerd[1440]: time="2025-07-14T22:01:17.723469411Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 14 22:01:18.203340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2315997410.mount: Deactivated successfully. Jul 14 22:01:19.793700 containerd[1440]: time="2025-07-14T22:01:19.793637205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:19.794785 containerd[1440]: time="2025-07-14T22:01:19.794329919Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 14 22:01:19.796284 containerd[1440]: time="2025-07-14T22:01:19.796217915Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:19.799429 containerd[1440]: time="2025-07-14T22:01:19.799395025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:01:19.800819 containerd[1440]: time="2025-07-14T22:01:19.800785494Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.077285644s" Jul 14 22:01:19.800819 containerd[1440]: time="2025-07-14T22:01:19.800817532Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 14 22:01:24.986648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Jul 14 22:01:24.996653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:01:25.139912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:01:25.143667 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:01:25.176493 kubelet[2032]: E0714 22:01:25.176428 2032 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:01:25.178617 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:01:25.178759 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:01:27.147230 update_engine[1424]: I20250714 22:01:27.147132 1424 update_attempter.cc:509] Updating boot flags... Jul 14 22:01:27.259494 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2047) Jul 14 22:01:27.288969 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2047) Jul 14 22:01:35.236752 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 14 22:01:35.246648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:01:35.347156 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:01:35.350403 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:01:35.381521 kubelet[2076]: E0714 22:01:35.381471 2076 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:01:35.383843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:01:35.383984 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:01:35.653325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:01:35.666776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:01:35.690555 systemd[1]: Reloading requested from client PID 2092 ('systemctl') (unit session-7.scope)... Jul 14 22:01:35.690572 systemd[1]: Reloading... Jul 14 22:01:35.752489 zram_generator::config[2131]: No configuration found. Jul 14 22:01:35.944179 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:01:35.996284 systemd[1]: Reloading finished in 305 ms. Jul 14 22:01:36.036250 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:01:36.038089 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:01:36.039882 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:01:36.040081 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 22:01:36.041389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:01:36.141986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:01:36.145911 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:01:36.175098 kubelet[2178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:01:36.175098 kubelet[2178]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 22:01:36.175098 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:01:36.175417 kubelet[2178]: I0714 22:01:36.175152 2178 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:01:36.742766 kubelet[2178]: I0714 22:01:36.742724 2178 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:01:36.742766 kubelet[2178]: I0714 22:01:36.742753 2178 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:01:36.743008 kubelet[2178]: I0714 22:01:36.742986 2178 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:01:36.796111 kubelet[2178]: E0714 22:01:36.796073 2178 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:01:36.799330 kubelet[2178]: I0714 22:01:36.799241 2178 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:01:36.804386 kubelet[2178]: E0714 22:01:36.804362 2178 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:01:36.804516 kubelet[2178]: I0714 22:01:36.804502 2178 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:01:36.807916 kubelet[2178]: I0714 22:01:36.807899 2178 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:01:36.809026 kubelet[2178]: I0714 22:01:36.808267 2178 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:01:36.809026 kubelet[2178]: I0714 22:01:36.808386 2178 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:01:36.809026 kubelet[2178]: I0714 22:01:36.808411 2178 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 22:01:36.809026 kubelet[2178]: I0714 22:01:36.808643 2178 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:01:36.809236 kubelet[2178]: I0714 22:01:36.808651 2178 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:01:36.809236 kubelet[2178]: I0714 22:01:36.808867 2178 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:01:36.812787 kubelet[2178]: I0714 22:01:36.812769 2178 kubelet.go:408] "Attempting to sync node with API server" Jul 14 22:01:36.812879 kubelet[2178]: I0714 22:01:36.812868 2178 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:01:36.812942 kubelet[2178]: I0714 22:01:36.812933 2178 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:01:36.813097 kubelet[2178]: I0714 22:01:36.813089 2178 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:01:36.814315 kubelet[2178]: W0714 22:01:36.814268 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 14 22:01:36.814388 kubelet[2178]: E0714 22:01:36.814323 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:01:36.814416 kubelet[2178]: W0714 22:01:36.814383 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 14 22:01:36.814416 kubelet[2178]: E0714 22:01:36.814408 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:01:36.816407 kubelet[2178]: I0714 22:01:36.816392 2178 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:01:36.819327 kubelet[2178]: I0714 22:01:36.819303 2178 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:01:36.819648 kubelet[2178]: W0714 22:01:36.819631 2178 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 22:01:36.820529 kubelet[2178]: I0714 22:01:36.820509 2178 server.go:1274] "Started kubelet" Jul 14 22:01:36.821278 kubelet[2178]: I0714 22:01:36.821240 2178 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:01:36.822356 kubelet[2178]: I0714 22:01:36.822264 2178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:01:36.822681 kubelet[2178]: I0714 22:01:36.822660 2178 server.go:449] "Adding debug handlers to kubelet server" Jul 14 22:01:36.822929 kubelet[2178]: I0714 22:01:36.822910 2178 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:01:36.823900 kubelet[2178]: I0714 22:01:36.823873 2178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:01:36.824410 kubelet[2178]: I0714 22:01:36.824385 2178 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:01:36.824726 kubelet[2178]: I0714 22:01:36.824705 2178 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:01:36.824821 kubelet[2178]: I0714 22:01:36.824782 2178 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:01:36.824855 kubelet[2178]: I0714 22:01:36.824840 2178 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:01:36.825164 kubelet[2178]: W0714 22:01:36.825125 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 14 22:01:36.825218 kubelet[2178]: E0714 22:01:36.825169 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 14 
22:01:36.825610 kubelet[2178]: E0714 22:01:36.825508 2178 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:01:36.825610 kubelet[2178]: E0714 22:01:36.824564 2178 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523d3399061703 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:01:36.820483843 +0000 UTC m=+0.671727991,LastTimestamp:2025-07-14 22:01:36.820483843 +0000 UTC m=+0.671727991,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:01:36.825610 kubelet[2178]: E0714 22:01:36.825589 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms" Jul 14 22:01:36.826462 kubelet[2178]: I0714 22:01:36.826296 2178 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:01:36.826462 kubelet[2178]: I0714 22:01:36.826391 2178 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:01:36.827040 kubelet[2178]: E0714 22:01:36.826990 2178 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:01:36.827494 kubelet[2178]: I0714 22:01:36.827479 2178 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:01:36.837881 kubelet[2178]: I0714 22:01:36.837772 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:01:36.838708 kubelet[2178]: I0714 22:01:36.838692 2178 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 22:01:36.839470 kubelet[2178]: I0714 22:01:36.838777 2178 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:01:36.839470 kubelet[2178]: I0714 22:01:36.838804 2178 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:01:36.839470 kubelet[2178]: E0714 22:01:36.838839 2178 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:01:36.839470 kubelet[2178]: W0714 22:01:36.839217 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 14 22:01:36.839470 kubelet[2178]: E0714 22:01:36.839251 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:01:36.841722 kubelet[2178]: I0714 22:01:36.841410 2178 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:01:36.841722 kubelet[2178]: I0714 22:01:36.841424 2178 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:01:36.841722 kubelet[2178]: I0714 22:01:36.841438 2178 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:01:36.911019 kubelet[2178]: I0714 22:01:36.910930 2178 policy_none.go:49] "None policy: Start" Jul 14 22:01:36.911700 kubelet[2178]: I0714 22:01:36.911666 2178 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:01:36.911700 kubelet[2178]: I0714 22:01:36.911692 2178 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:01:36.916937 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 14 22:01:36.926556 kubelet[2178]: E0714 22:01:36.926528 2178 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:01:36.930636 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 14 22:01:36.933264 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 14 22:01:36.938903 kubelet[2178]: E0714 22:01:36.938875 2178 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:01:36.943158 kubelet[2178]: I0714 22:01:36.943063 2178 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:01:36.943252 kubelet[2178]: I0714 22:01:36.943227 2178 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:01:36.943317 kubelet[2178]: I0714 22:01:36.943284 2178 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:01:36.943947 kubelet[2178]: I0714 22:01:36.943727 2178 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:01:36.944896 kubelet[2178]: E0714 22:01:36.944874 2178 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 22:01:37.026059 kubelet[2178]: E0714 22:01:37.025946 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms" Jul 14 22:01:37.046264 kubelet[2178]: I0714 22:01:37.046237 2178 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:01:37.046755 kubelet[2178]: E0714 22:01:37.046718 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jul 14 22:01:37.145886 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. Jul 14 22:01:37.164750 systemd[1]: Created slice kubepods-burstable-pod18e5cb05f3310d1172870412c2f76029.slice - libcontainer container kubepods-burstable-pod18e5cb05f3310d1172870412c2f76029.slice. Jul 14 22:01:37.167757 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. 
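The kubepods-burstable-pod...slice units created above follow the systemd cgroup driver's naming scheme (SystemdCgroup:true in the containerd runc options earlier): QoS class, then "pod", then the pod UID with any dashes replaced by underscores (the static-pod UIDs here contain none). A simplified rendering of that naming, not kubelet source:

    # Form the systemd slice name for a pod, mirroring the slices created above.
    def pod_slice(qos_class: str, pod_uid: str) -> str:
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice("burstable", "d4a6b755cb4739fbca401212ebb82b6d"))
    # kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice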
Jul 14 22:01:37.226633 kubelet[2178]: I0714 22:01:37.226595 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:01:37.227187 kubelet[2178]: I0714 22:01:37.227002 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:01:37.227187 kubelet[2178]: I0714 22:01:37.227043 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:01:37.227187 kubelet[2178]: I0714 22:01:37.227063 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
Jul 14 22:01:37.227187 kubelet[2178]: I0714 22:01:37.227078 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:01:37.227187 kubelet[2178]: I0714 22:01:37.227103 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:01:37.227343 kubelet[2178]: I0714 22:01:37.227120 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/18e5cb05f3310d1172870412c2f76029-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"18e5cb05f3310d1172870412c2f76029\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 22:01:37.227343 kubelet[2178]: I0714 22:01:37.227134 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18e5cb05f3310d1172870412c2f76029-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"18e5cb05f3310d1172870412c2f76029\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 22:01:37.227343 kubelet[2178]: I0714 22:01:37.227149 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/18e5cb05f3310d1172870412c2f76029-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"18e5cb05f3310d1172870412c2f76029\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 22:01:37.248604 kubelet[2178]: I0714 22:01:37.248568 2178 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 14 22:01:37.248913 kubelet[2178]: E0714 22:01:37.248887 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Jul 14 22:01:37.426545 kubelet[2178]: E0714 22:01:37.426418 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms"
Jul 14 22:01:37.462823 kubelet[2178]: E0714 22:01:37.462786 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:37.463487 containerd[1440]: time="2025-07-14T22:01:37.463369513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}"
Jul 14 22:01:37.466576 kubelet[2178]: E0714 22:01:37.466550 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:37.466981 containerd[1440]: time="2025-07-14T22:01:37.466946535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:18e5cb05f3310d1172870412c2f76029,Namespace:kube-system,Attempt:0,}"
Jul 14 22:01:37.470484 kubelet[2178]: E0714 22:01:37.470466 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:37.470774 containerd[1440]: time="2025-07-14T22:01:37.470747831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}"
Jul 14 22:01:37.650571 kubelet[2178]: I0714 22:01:37.650539 2178 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 14 22:01:37.650925 kubelet[2178]: E0714 22:01:37.650900 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Jul 14 22:01:37.867480 kubelet[2178]: W0714 22:01:37.867407 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Jul 14 22:01:37.867585 kubelet[2178]: E0714 22:01:37.867484 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Jul 14 22:01:37.891283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630058570.mount: Deactivated successfully.
Jul 14 22:01:37.894827 containerd[1440]: time="2025-07-14T22:01:37.894787653Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:01:37.896471 containerd[1440]: time="2025-07-14T22:01:37.896436727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jul 14 22:01:37.897080 containerd[1440]: time="2025-07-14T22:01:37.897050551Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:01:37.897824 containerd[1440]: time="2025-07-14T22:01:37.897795970Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:01:37.898192 containerd[1440]: time="2025-07-14T22:01:37.898166120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 14 22:01:37.898555 containerd[1440]: time="2025-07-14T22:01:37.898526110Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 14 22:01:37.899157 containerd[1440]: time="2025-07-14T22:01:37.899124774Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:01:37.902512 containerd[1440]: time="2025-07-14T22:01:37.902479522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:01:37.903463 containerd[1440]: time="2025-07-14T22:01:37.903410937Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 439.960186ms"
Jul 14 22:01:37.905922 containerd[1440]: time="2025-07-14T22:01:37.905891869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 435.08884ms"
Jul 14 22:01:37.906629 containerd[1440]: time="2025-07-14T22:01:37.906605250Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 439.593437ms"
Jul 14 22:01:38.019437 kubelet[2178]: W0714 22:01:38.018739 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Jul 14 22:01:38.019437 kubelet[2178]: E0714 22:01:38.018800 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Jul 14 22:01:38.027322 containerd[1440]: time="2025-07-14T22:01:38.027199586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:01:38.027322 containerd[1440]: time="2025-07-14T22:01:38.027251905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:01:38.027322 containerd[1440]: time="2025-07-14T22:01:38.027276184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:01:38.027322 containerd[1440]: time="2025-07-14T22:01:38.027241545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:01:38.027322 containerd[1440]: time="2025-07-14T22:01:38.027294024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:01:38.027532 containerd[1440]: time="2025-07-14T22:01:38.027335943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:01:38.027532 containerd[1440]: time="2025-07-14T22:01:38.027426940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:01:38.027955 containerd[1440]: time="2025-07-14T22:01:38.027816450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:01:38.027955 containerd[1440]: time="2025-07-14T22:01:38.027882368Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:01:38.027955 containerd[1440]: time="2025-07-14T22:01:38.027897728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:01:38.028213 containerd[1440]: time="2025-07-14T22:01:38.028031765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:01:38.028303 containerd[1440]: time="2025-07-14T22:01:38.028172241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:01:38.043635 systemd[1]: Started cri-containerd-5dfbe4e240a8445fd97a7c79ce71d8762cb354934317bd59ed0ad6813eabbb7e.scope - libcontainer container 5dfbe4e240a8445fd97a7c79ce71d8762cb354934317bd59ed0ad6813eabbb7e.
Jul 14 22:01:38.047693 systemd[1]: Started cri-containerd-8b2ebca274847d97abdb4247a58b886eaa15b724c0ed01331873a55ab5b792f3.scope - libcontainer container 8b2ebca274847d97abdb4247a58b886eaa15b724c0ed01331873a55ab5b792f3.
Jul 14 22:01:38.048875 systemd[1]: Started cri-containerd-9076bf5b50475f40e247be0808eaf8e29a0019a2457f9a565cb1248102928061.scope - libcontainer container 9076bf5b50475f40e247be0808eaf8e29a0019a2457f9a565cb1248102928061.
Jul 14 22:01:38.074824 kubelet[2178]: W0714 22:01:38.074662 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Jul 14 22:01:38.074824 kubelet[2178]: E0714 22:01:38.074777 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Jul 14 22:01:38.081927 containerd[1440]: time="2025-07-14T22:01:38.081626960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5dfbe4e240a8445fd97a7c79ce71d8762cb354934317bd59ed0ad6813eabbb7e\""
Jul 14 22:01:38.082986 kubelet[2178]: E0714 22:01:38.082894 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:38.085960 containerd[1440]: time="2025-07-14T22:01:38.085631855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b2ebca274847d97abdb4247a58b886eaa15b724c0ed01331873a55ab5b792f3\""
Jul 14 22:01:38.086032 containerd[1440]: time="2025-07-14T22:01:38.085643055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:18e5cb05f3310d1172870412c2f76029,Namespace:kube-system,Attempt:0,} returns sandbox id \"9076bf5b50475f40e247be0808eaf8e29a0019a2457f9a565cb1248102928061\""
Jul 14 22:01:38.086032 containerd[1440]: time="2025-07-14T22:01:38.085823410Z" level=info msg="CreateContainer within sandbox \"5dfbe4e240a8445fd97a7c79ce71d8762cb354934317bd59ed0ad6813eabbb7e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 14 22:01:38.086967 kubelet[2178]: E0714 22:01:38.086876 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:38.086967 kubelet[2178]: E0714 22:01:38.086884 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:38.088298 containerd[1440]: time="2025-07-14T22:01:38.088268946Z" level=info msg="CreateContainer within sandbox \"8b2ebca274847d97abdb4247a58b886eaa15b724c0ed01331873a55ab5b792f3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 14 22:01:38.088974 containerd[1440]: time="2025-07-14T22:01:38.088911969Z" level=info msg="CreateContainer within sandbox \"9076bf5b50475f40e247be0808eaf8e29a0019a2457f9a565cb1248102928061\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 14 22:01:38.100380 containerd[1440]: time="2025-07-14T22:01:38.100322350Z" level=info msg="CreateContainer within sandbox \"5dfbe4e240a8445fd97a7c79ce71d8762cb354934317bd59ed0ad6813eabbb7e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"371d693298094924f897c6f214a7f9183eacf5ec0f0d38ac68583b51766a8b84\""
Jul 14 22:01:38.100883 containerd[1440]: time="2025-07-14T22:01:38.100859256Z" level=info msg="StartContainer for \"371d693298094924f897c6f214a7f9183eacf5ec0f0d38ac68583b51766a8b84\""
Jul 14 22:01:38.103646 containerd[1440]: time="2025-07-14T22:01:38.103609944Z" level=info msg="CreateContainer within sandbox \"9076bf5b50475f40e247be0808eaf8e29a0019a2457f9a565cb1248102928061\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"00b8a9e0c0870ecd590dbdd632136c5a2d9bc6a5bd7e407f5c8283f03056ea5f\""
Jul 14 22:01:38.105722 containerd[1440]: time="2025-07-14T22:01:38.104030813Z" level=info msg="StartContainer for \"00b8a9e0c0870ecd590dbdd632136c5a2d9bc6a5bd7e407f5c8283f03056ea5f\""
Jul 14 22:01:38.105722 containerd[1440]: time="2025-07-14T22:01:38.105665810Z" level=info msg="CreateContainer within sandbox \"8b2ebca274847d97abdb4247a58b886eaa15b724c0ed01331873a55ab5b792f3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e82402ee7c0d3f354ba2da8946bfdd59594012bb6af46287523f0aa80d5dd517\""
Jul 14 22:01:38.106732 containerd[1440]: time="2025-07-14T22:01:38.106711503Z" level=info msg="StartContainer for \"e82402ee7c0d3f354ba2da8946bfdd59594012bb6af46287523f0aa80d5dd517\""
Jul 14 22:01:38.127619 systemd[1]: Started cri-containerd-371d693298094924f897c6f214a7f9183eacf5ec0f0d38ac68583b51766a8b84.scope - libcontainer container 371d693298094924f897c6f214a7f9183eacf5ec0f0d38ac68583b51766a8b84.
Jul 14 22:01:38.129101 kubelet[2178]: W0714 22:01:38.128310 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Jul 14 22:01:38.129241 kubelet[2178]: E0714 22:01:38.129205 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Jul 14 22:01:38.132008 systemd[1]: Started cri-containerd-00b8a9e0c0870ecd590dbdd632136c5a2d9bc6a5bd7e407f5c8283f03056ea5f.scope - libcontainer container 00b8a9e0c0870ecd590dbdd632136c5a2d9bc6a5bd7e407f5c8283f03056ea5f.
Jul 14 22:01:38.132853 systemd[1]: Started cri-containerd-e82402ee7c0d3f354ba2da8946bfdd59594012bb6af46287523f0aa80d5dd517.scope - libcontainer container e82402ee7c0d3f354ba2da8946bfdd59594012bb6af46287523f0aa80d5dd517.
Jul 14 22:01:38.163991 containerd[1440]: time="2025-07-14T22:01:38.162614438Z" level=info msg="StartContainer for \"371d693298094924f897c6f214a7f9183eacf5ec0f0d38ac68583b51766a8b84\" returns successfully"
Jul 14 22:01:38.166858 containerd[1440]: time="2025-07-14T22:01:38.166812128Z" level=info msg="StartContainer for \"00b8a9e0c0870ecd590dbdd632136c5a2d9bc6a5bd7e407f5c8283f03056ea5f\" returns successfully"
Jul 14 22:01:38.176000 containerd[1440]: time="2025-07-14T22:01:38.175092511Z" level=info msg="StartContainer for \"e82402ee7c0d3f354ba2da8946bfdd59594012bb6af46287523f0aa80d5dd517\" returns successfully"
Jul 14 22:01:38.227650 kubelet[2178]: E0714 22:01:38.227616 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="1.6s"
Jul 14 22:01:38.455613 kubelet[2178]: I0714 22:01:38.452877 2178 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 14 22:01:38.851017 kubelet[2178]: E0714 22:01:38.850887 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:38.853251 kubelet[2178]: E0714 22:01:38.853233 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:38.854754 kubelet[2178]: E0714 22:01:38.854706 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:39.856913 kubelet[2178]: E0714 22:01:39.856791 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:39.856913 kubelet[2178]: E0714 22:01:39.856810 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:39.940466 kubelet[2178]: E0714 22:01:39.940390 2178 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 14 22:01:40.110097 kubelet[2178]: I0714 22:01:40.109536 2178 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 14 22:01:40.110097 kubelet[2178]: E0714 22:01:40.109577 2178 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 14 22:01:40.816003 kubelet[2178]: I0714 22:01:40.815962 2178 apiserver.go:52] "Watching apiserver"
Jul 14 22:01:40.825597 kubelet[2178]: I0714 22:01:40.825568 2178 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 14 22:01:41.420580 kubelet[2178]: E0714 22:01:41.420525 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:41.858707 kubelet[2178]: E0714 22:01:41.858583 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:42.111215 systemd[1]: Reloading requested from client PID 2454 ('systemctl') (unit session-7.scope)...
Jul 14 22:01:42.111232 systemd[1]: Reloading...
Jul 14 22:01:42.184491 zram_generator::config[2496]: No configuration found.
Jul 14 22:01:42.265088 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 22:01:42.333047 systemd[1]: Reloading finished in 221 ms.
Jul 14 22:01:42.363708 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:01:42.385221 systemd[1]: kubelet.service: Deactivated successfully.
Jul 14 22:01:42.385512 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:01:42.385629 systemd[1]: kubelet.service: Consumed 1.042s CPU time, 129.4M memory peak, 0B memory swap peak.
Jul 14 22:01:42.394821 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:01:42.513720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 14 22:01:42.518975 (kubelet)[2535]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 14 22:01:42.555690 kubelet[2535]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 22:01:42.555690 kubelet[2535]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 14 22:01:42.555690 kubelet[2535]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 14 22:01:42.556683 kubelet[2535]: I0714 22:01:42.555735 2535 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 14 22:01:42.563218 kubelet[2535]: I0714 22:01:42.563162 2535 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 14 22:01:42.563218 kubelet[2535]: I0714 22:01:42.563199 2535 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 14 22:01:42.563444 kubelet[2535]: I0714 22:01:42.563415 2535 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 14 22:01:42.564737 kubelet[2535]: I0714 22:01:42.564713 2535 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 14 22:01:42.566594 kubelet[2535]: I0714 22:01:42.566562 2535 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 14 22:01:42.570889 kubelet[2535]: E0714 22:01:42.570846 2535 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 14 22:01:42.570889 kubelet[2535]: I0714 22:01:42.570876 2535 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 14 22:01:42.573162 kubelet[2535]: I0714 22:01:42.573093 2535 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 14 22:01:42.573285 kubelet[2535]: I0714 22:01:42.573272 2535 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 14 22:01:42.573424 kubelet[2535]: I0714 22:01:42.573385 2535 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 14 22:01:42.573592 kubelet[2535]: I0714 22:01:42.573410 2535 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 14 22:01:42.573592 kubelet[2535]: I0714 22:01:42.573588 2535 topology_manager.go:138] "Creating topology manager with none policy"
Jul 14 22:01:42.573701 kubelet[2535]: I0714 22:01:42.573597 2535 container_manager_linux.go:300] "Creating device plugin manager"
Jul 14 22:01:42.573701 kubelet[2535]: I0714 22:01:42.573630 2535 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 22:01:42.573754 kubelet[2535]: I0714 22:01:42.573732 2535 kubelet.go:408] "Attempting to sync node with API server"
Jul 14 22:01:42.573754 kubelet[2535]: I0714 22:01:42.573743 2535 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 14 22:01:42.573797 kubelet[2535]: I0714 22:01:42.573762 2535 kubelet.go:314] "Adding apiserver pod source"
Jul 14 22:01:42.573797 kubelet[2535]: I0714 22:01:42.573780 2535 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 14 22:01:42.574724 kubelet[2535]: I0714 22:01:42.574263 2535 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 14 22:01:42.574724 kubelet[2535]: I0714 22:01:42.574697 2535 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 14 22:01:42.575068 kubelet[2535]: I0714 22:01:42.575039 2535 server.go:1274] "Started kubelet"
Jul 14 22:01:42.575894 kubelet[2535]: I0714 22:01:42.575850 2535 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 14 22:01:42.576176 kubelet[2535]: I0714 22:01:42.576158 2535 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 14 22:01:42.576308 kubelet[2535]: I0714 22:01:42.576289 2535 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 14 22:01:42.576836 kubelet[2535]: I0714 22:01:42.576804 2535 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 14 22:01:42.577250 kubelet[2535]: I0714 22:01:42.577229 2535 server.go:449] "Adding debug handlers to kubelet server"
Jul 14 22:01:42.578125 kubelet[2535]: I0714 22:01:42.576341 2535 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 14 22:01:42.578224 kubelet[2535]: I0714 22:01:42.578200 2535 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 14 22:01:42.578312 kubelet[2535]: I0714 22:01:42.578297 2535 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 14 22:01:42.578420 kubelet[2535]: I0714 22:01:42.578404 2535 reconciler.go:26] "Reconciler: start to sync state"
Jul 14 22:01:42.579081 kubelet[2535]: E0714 22:01:42.579049 2535 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:01:42.582884 kubelet[2535]: I0714 22:01:42.582856 2535 factory.go:221] Registration of the systemd container factory successfully
Jul 14 22:01:42.582958 kubelet[2535]: I0714 22:01:42.582944 2535 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 14 22:01:42.599932 kubelet[2535]: I0714 22:01:42.599902 2535 factory.go:221] Registration of the containerd container factory successfully
Jul 14 22:01:42.604156 kubelet[2535]: E0714 22:01:42.604111 2535 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 14 22:01:42.605989 kubelet[2535]: I0714 22:01:42.605883 2535 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 14 22:01:42.607825 kubelet[2535]: I0714 22:01:42.607802 2535 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 14 22:01:42.607825 kubelet[2535]: I0714 22:01:42.607826 2535 status_manager.go:217] "Starting to sync pod status with apiserver"
Jul 14 22:01:42.607923 kubelet[2535]: I0714 22:01:42.607845 2535 kubelet.go:2321] "Starting kubelet main sync loop"
Jul 14 22:01:42.607923 kubelet[2535]: E0714 22:01:42.607902 2535 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 14 22:01:42.634703 kubelet[2535]: I0714 22:01:42.634620 2535 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 14 22:01:42.634703 kubelet[2535]: I0714 22:01:42.634636 2535 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 14 22:01:42.634703 kubelet[2535]: I0714 22:01:42.634653 2535 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 22:01:42.634827 kubelet[2535]: I0714 22:01:42.634784 2535 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 14 22:01:42.634827 kubelet[2535]: I0714 22:01:42.634794 2535 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 14 22:01:42.634827 kubelet[2535]: I0714 22:01:42.634809 2535 policy_none.go:49] "None policy: Start"
Jul 14 22:01:42.636317 kubelet[2535]: I0714 22:01:42.636091 2535 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 14 22:01:42.636317 kubelet[2535]: I0714 22:01:42.636116 2535 state_mem.go:35] "Initializing new in-memory state store"
Jul 14 22:01:42.636317 kubelet[2535]: I0714 22:01:42.636261 2535 state_mem.go:75] "Updated machine memory state"
Jul 14 22:01:42.641953 kubelet[2535]: I0714 22:01:42.641930 2535 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 14 22:01:42.642323 kubelet[2535]: I0714 22:01:42.642092 2535 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 14 22:01:42.642323 kubelet[2535]: I0714 22:01:42.642112 2535 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 14 22:01:42.642421 kubelet[2535]: I0714 22:01:42.642369 2535 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 14 22:01:42.714460 kubelet[2535]: E0714 22:01:42.714415 2535 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 14 22:01:42.746000 kubelet[2535]: I0714 22:01:42.745961 2535 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jul 14 22:01:42.751518 kubelet[2535]: I0714 22:01:42.751491 2535 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jul 14 22:01:42.751600 kubelet[2535]: I0714 22:01:42.751560 2535 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jul 14 22:01:42.879955 kubelet[2535]: I0714 22:01:42.879918 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:01:42.880332 kubelet[2535]: I0714 22:01:42.880139 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:01:42.880332 kubelet[2535]: I0714 22:01:42.880180 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/18e5cb05f3310d1172870412c2f76029-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"18e5cb05f3310d1172870412c2f76029\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 22:01:42.880332 kubelet[2535]: I0714 22:01:42.880200 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/18e5cb05f3310d1172870412c2f76029-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"18e5cb05f3310d1172870412c2f76029\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 22:01:42.880332 kubelet[2535]: I0714 22:01:42.880220 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:01:42.880332 kubelet[2535]: I0714 22:01:42.880240 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:01:42.880484 kubelet[2535]: I0714 22:01:42.880258 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost"
Jul 14 22:01:42.880484 kubelet[2535]: I0714 22:01:42.880272 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/18e5cb05f3310d1172870412c2f76029-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"18e5cb05f3310d1172870412c2f76029\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 22:01:42.880484 kubelet[2535]: I0714 22:01:42.880290 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:01:43.014017 kubelet[2535]: E0714 22:01:43.013963 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:43.014373 kubelet[2535]: E0714 22:01:43.014304 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:43.014975 kubelet[2535]: E0714 22:01:43.014955 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:43.574086 kubelet[2535]: I0714 22:01:43.574054 2535 apiserver.go:52] "Watching apiserver"
Jul 14 22:01:43.578792 kubelet[2535]: I0714 22:01:43.578731 2535 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Jul 14 22:01:43.623026 kubelet[2535]: E0714 22:01:43.622961 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:43.627499 kubelet[2535]: E0714 22:01:43.627474 2535 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:01:43.627916 kubelet[2535]: E0714 22:01:43.627853 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:43.627916 kubelet[2535]: E0714 22:01:43.627903 2535 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 14 22:01:43.628114 kubelet[2535]: E0714 22:01:43.628018 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:43.647716 kubelet[2535]: I0714 22:01:43.647647 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.64763368 podStartE2EDuration="1.64763368s" podCreationTimestamp="2025-07-14 22:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:01:43.641035942 +0000 UTC m=+1.119022666" watchObservedRunningTime="2025-07-14 22:01:43.64763368 +0000 UTC m=+1.125620404"
Jul 14 22:01:43.647812 kubelet[2535]: I0714 22:01:43.647742 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.647738037 podStartE2EDuration="2.647738037s" podCreationTimestamp="2025-07-14 22:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:01:43.647721998 +0000 UTC m=+1.125708722" watchObservedRunningTime="2025-07-14 22:01:43.647738037 +0000 UTC m=+1.125724761"
Jul 14 22:01:43.661486 kubelet[2535]: I0714 22:01:43.661424 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6614110210000002 podStartE2EDuration="1.661411021s" podCreationTimestamp="2025-07-14 22:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:01:43.654357654 +0000 UTC m=+1.132344378" watchObservedRunningTime="2025-07-14 22:01:43.661411021 +0000 UTC m=+1.139397745"
Jul 14 22:01:44.627001 kubelet[2535]: E0714 22:01:44.626963 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:44.627543 kubelet[2535]: E0714 22:01:44.627394 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:45.628883 kubelet[2535]: E0714 22:01:45.628829 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:47.044530 kubelet[2535]: I0714 22:01:47.044492 2535 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 14 22:01:47.045832 containerd[1440]: time="2025-07-14T22:01:47.045748991Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 14 22:01:47.046086 kubelet[2535]: I0714 22:01:47.045969 2535 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 14 22:01:47.866198 systemd[1]: Created slice kubepods-besteffort-podd0a16443_bbed_420d_ad47_ddec4a814e55.slice - libcontainer container kubepods-besteffort-podd0a16443_bbed_420d_ad47_ddec4a814e55.slice.
Jul 14 22:01:47.909872 kubelet[2535]: I0714 22:01:47.909822 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0a16443-bbed-420d-ad47-ddec4a814e55-lib-modules\") pod \"kube-proxy-rs6zp\" (UID: \"d0a16443-bbed-420d-ad47-ddec4a814e55\") " pod="kube-system/kube-proxy-rs6zp"
Jul 14 22:01:47.909872 kubelet[2535]: I0714 22:01:47.909866 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d0a16443-bbed-420d-ad47-ddec4a814e55-kube-proxy\") pod \"kube-proxy-rs6zp\" (UID: \"d0a16443-bbed-420d-ad47-ddec4a814e55\") " pod="kube-system/kube-proxy-rs6zp"
Jul 14 22:01:47.910039 kubelet[2535]: I0714 22:01:47.909886 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0a16443-bbed-420d-ad47-ddec4a814e55-xtables-lock\") pod \"kube-proxy-rs6zp\" (UID: \"d0a16443-bbed-420d-ad47-ddec4a814e55\") " pod="kube-system/kube-proxy-rs6zp"
Jul 14 22:01:47.910039 kubelet[2535]: I0714 22:01:47.909902 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c788\" (UniqueName: \"kubernetes.io/projected/d0a16443-bbed-420d-ad47-ddec4a814e55-kube-api-access-2c788\") pod \"kube-proxy-rs6zp\" (UID: \"d0a16443-bbed-420d-ad47-ddec4a814e55\") " pod="kube-system/kube-proxy-rs6zp"
Jul 14 22:01:48.180289 kubelet[2535]: E0714 22:01:48.179955 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:48.181212 containerd[1440]: time="2025-07-14T22:01:48.181128874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rs6zp,Uid:d0a16443-bbed-420d-ad47-ddec4a814e55,Namespace:kube-system,Attempt:0,}"
Jul 14 22:01:48.200250 containerd[1440]: time="2025-07-14T22:01:48.200142965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:01:48.200250 containerd[1440]: time="2025-07-14T22:01:48.200218844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:01:48.200250 containerd[1440]: time="2025-07-14T22:01:48.200235363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:01:48.200527 containerd[1440]: time="2025-07-14T22:01:48.200333322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:01:48.221635 systemd[1]: Started cri-containerd-9a8d2d22de13958aaede2ebd91c03b4c5911539b70ab2fc2b04c60b367d8ca04.scope - libcontainer container 9a8d2d22de13958aaede2ebd91c03b4c5911539b70ab2fc2b04c60b367d8ca04.
Jul 14 22:01:48.238926 containerd[1440]: time="2025-07-14T22:01:48.238886014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rs6zp,Uid:d0a16443-bbed-420d-ad47-ddec4a814e55,Namespace:kube-system,Attempt:0,} returns sandbox id \"9a8d2d22de13958aaede2ebd91c03b4c5911539b70ab2fc2b04c60b367d8ca04\""
Jul 14 22:01:48.239612 kubelet[2535]: E0714 22:01:48.239593 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:48.241600 containerd[1440]: time="2025-07-14T22:01:48.241563445Z" level=info msg="CreateContainer within sandbox \"9a8d2d22de13958aaede2ebd91c03b4c5911539b70ab2fc2b04c60b367d8ca04\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 14 22:01:48.254623 containerd[1440]: time="2025-07-14T22:01:48.25454446Z" level=info msg="CreateContainer within sandbox \"9a8d2d22de13958aaede2ebd91c03b4c5911539b70ab2fc2b04c60b367d8ca04\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9f9115676b257de08c741f46009d333fd568a855c72d753e821db4ed38eeabf6\""
Jul 14 22:01:48.255269 containerd[1440]: time="2025-07-14T22:01:48.255093836Z" level=info msg="StartContainer for \"9f9115676b257de08c741f46009d333fd568a855c72d753e821db4ed38eeabf6\""
Jul 14 22:01:48.278608 systemd[1]: Started cri-containerd-9f9115676b257de08c741f46009d333fd568a855c72d753e821db4ed38eeabf6.scope - libcontainer container 9f9115676b257de08c741f46009d333fd568a855c72d753e821db4ed38eeabf6.
Jul 14 22:01:48.304611 containerd[1440]: time="2025-07-14T22:01:48.304018978Z" level=info msg="StartContainer for \"9f9115676b257de08c741f46009d333fd568a855c72d753e821db4ed38eeabf6\" returns successfully"
Jul 14 22:01:48.634528 kubelet[2535]: E0714 22:01:48.634444 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:48.698941 kubelet[2535]: E0714 22:01:48.698864 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:48.717777 kubelet[2535]: I0714 22:01:48.717724 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rs6zp" podStartSLOduration=1.717658984 podStartE2EDuration="1.717658984s" podCreationTimestamp="2025-07-14 22:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:01:48.64431741 +0000 UTC m=+6.122304174" watchObservedRunningTime="2025-07-14 22:01:48.717658984 +0000 UTC m=+6.195645708"
Jul 14 22:01:49.021412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2076742459.mount: Deactivated successfully.
Jul 14 22:01:49.636872 kubelet[2535]: E0714 22:01:49.636841 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:49.896233 kubelet[2535]: E0714 22:01:49.895991 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:50.639418 kubelet[2535]: E0714 22:01:50.638310 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:53.740645 systemd[1]: Created slice kubepods-besteffort-pod2e7fa342_7176_428a_9113_66d5f8ea5989.slice - libcontainer container kubepods-besteffort-pod2e7fa342_7176_428a_9113_66d5f8ea5989.slice.
Jul 14 22:01:53.846378 kubelet[2535]: I0714 22:01:53.846303 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2e7fa342-7176-428a-9113-66d5f8ea5989-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-7psvt\" (UID: \"2e7fa342-7176-428a-9113-66d5f8ea5989\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-7psvt"
Jul 14 22:01:53.846378 kubelet[2535]: I0714 22:01:53.846347 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr64k\" (UniqueName: \"kubernetes.io/projected/2e7fa342-7176-428a-9113-66d5f8ea5989-kube-api-access-dr64k\") pod \"tigera-operator-5bf8dfcb4-7psvt\" (UID: \"2e7fa342-7176-428a-9113-66d5f8ea5989\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-7psvt"
Jul 14 22:01:54.044670 containerd[1440]: time="2025-07-14T22:01:54.044508931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-7psvt,Uid:2e7fa342-7176-428a-9113-66d5f8ea5989,Namespace:tigera-operator,Attempt:0,}"
Jul 14 22:01:54.065846 containerd[1440]: time="2025-07-14T22:01:54.065760080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:01:54.065846 containerd[1440]: time="2025-07-14T22:01:54.065815479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:01:54.065846 containerd[1440]: time="2025-07-14T22:01:54.065827479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:01:54.066156 containerd[1440]: time="2025-07-14T22:01:54.065897478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:01:54.086610 systemd[1]: Started cri-containerd-0aceb49d12d444f9d0859be47e85cfea69547e5d8d41d1632817602ce122a762.scope - libcontainer container 0aceb49d12d444f9d0859be47e85cfea69547e5d8d41d1632817602ce122a762.
Jul 14 22:01:54.110319 containerd[1440]: time="2025-07-14T22:01:54.110280946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-7psvt,Uid:2e7fa342-7176-428a-9113-66d5f8ea5989,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0aceb49d12d444f9d0859be47e85cfea69547e5d8d41d1632817602ce122a762\""
Jul 14 22:01:54.112288 containerd[1440]: time="2025-07-14T22:01:54.112245435Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 14 22:01:55.079057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296353109.mount: Deactivated successfully.
Jul 14 22:01:55.197372 kubelet[2535]: E0714 22:01:55.197331 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:01:55.477369 containerd[1440]: time="2025-07-14T22:01:55.477251982Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:01:55.478283 containerd[1440]: time="2025-07-14T22:01:55.477994050Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 14 22:01:55.479253 containerd[1440]: time="2025-07-14T22:01:55.479210192Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:01:55.481078 containerd[1440]: time="2025-07-14T22:01:55.481026084Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:01:55.481967 containerd[1440]: time="2025-07-14T22:01:55.481921631Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.369640156s"
Jul 14 22:01:55.481967 containerd[1440]: time="2025-07-14T22:01:55.481958150Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 14 22:01:55.484271 containerd[1440]: time="2025-07-14T22:01:55.484165877Z" level=info msg="CreateContainer within sandbox \"0aceb49d12d444f9d0859be47e85cfea69547e5d8d41d1632817602ce122a762\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 14 22:01:55.493509 containerd[1440]: time="2025-07-14T22:01:55.493467855Z" level=info msg="CreateContainer within sandbox \"0aceb49d12d444f9d0859be47e85cfea69547e5d8d41d1632817602ce122a762\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6aabbb844c877586c98995ca358fef0974fce9faef6802cbca47c08396cae625\""
Jul 14 22:01:55.494652 containerd[1440]: time="2025-07-14T22:01:55.494618198Z" level=info msg="StartContainer for \"6aabbb844c877586c98995ca358fef0974fce9faef6802cbca47c08396cae625\""
Jul 14 22:01:55.517602 systemd[1]: Started cri-containerd-6aabbb844c877586c98995ca358fef0974fce9faef6802cbca47c08396cae625.scope - libcontainer container 6aabbb844c877586c98995ca358fef0974fce9faef6802cbca47c08396cae625.
Jul 14 22:01:55.552162 containerd[1440]: time="2025-07-14T22:01:55.552108723Z" level=info msg="StartContainer for \"6aabbb844c877586c98995ca358fef0974fce9faef6802cbca47c08396cae625\" returns successfully"
Jul 14 22:01:55.658392 kubelet[2535]: I0714 22:01:55.658195 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-7psvt" podStartSLOduration=1.286595663 podStartE2EDuration="2.658164709s" podCreationTimestamp="2025-07-14 22:01:53 +0000 UTC" firstStartedPulling="2025-07-14 22:01:54.111430448 +0000 UTC m=+11.589417172" lastFinishedPulling="2025-07-14 22:01:55.482999494 +0000 UTC m=+12.960986218" observedRunningTime="2025-07-14 22:01:55.657667957 +0000 UTC m=+13.135654681" watchObservedRunningTime="2025-07-14 22:01:55.658164709 +0000 UTC m=+13.136151433"
Jul 14 22:02:00.973720 sudo[1619]: pam_unix(sudo:session): session closed for user root
Jul 14 22:02:00.981798 sshd[1616]: pam_unix(sshd:session): session closed for user core
Jul 14 22:02:00.989653 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:54906.service: Deactivated successfully.
Jul 14 22:02:00.992798 systemd[1]: session-7.scope: Deactivated successfully.
Jul 14 22:02:00.992970 systemd[1]: session-7.scope: Consumed 8.106s CPU time, 156.0M memory peak, 0B memory swap peak.
Jul 14 22:02:00.994776 systemd-logind[1420]: Session 7 logged out. Waiting for processes to exit.
Jul 14 22:02:00.995967 systemd-logind[1420]: Removed session 7.
Jul 14 22:02:07.210490 systemd[1]: Created slice kubepods-besteffort-pod0d769861_a78b_4f05_b02a_2fd240daf60c.slice - libcontainer container kubepods-besteffort-pod0d769861_a78b_4f05_b02a_2fd240daf60c.slice.
Jul 14 22:02:07.337057 kubelet[2535]: I0714 22:02:07.337014 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d769861-a78b-4f05-b02a-2fd240daf60c-tigera-ca-bundle\") pod \"calico-typha-65458fd77d-95x2h\" (UID: \"0d769861-a78b-4f05-b02a-2fd240daf60c\") " pod="calico-system/calico-typha-65458fd77d-95x2h"
Jul 14 22:02:07.337057 kubelet[2535]: I0714 22:02:07.337064 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbns2\" (UniqueName: \"kubernetes.io/projected/0d769861-a78b-4f05-b02a-2fd240daf60c-kube-api-access-hbns2\") pod \"calico-typha-65458fd77d-95x2h\" (UID: \"0d769861-a78b-4f05-b02a-2fd240daf60c\") " pod="calico-system/calico-typha-65458fd77d-95x2h"
Jul 14 22:02:07.337485 kubelet[2535]: I0714 22:02:07.337120 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0d769861-a78b-4f05-b02a-2fd240daf60c-typha-certs\") pod \"calico-typha-65458fd77d-95x2h\" (UID: \"0d769861-a78b-4f05-b02a-2fd240daf60c\") " pod="calico-system/calico-typha-65458fd77d-95x2h"
Jul 14 22:02:07.502829 systemd[1]: Created slice kubepods-besteffort-pod6228dc15_e4ed_4bb5_9c48_a28a1d5afb58.slice - libcontainer container kubepods-besteffort-pod6228dc15_e4ed_4bb5_9c48_a28a1d5afb58.slice.
Jul 14 22:02:07.518640 kubelet[2535]: E0714 22:02:07.518608 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:02:07.519869 containerd[1440]: time="2025-07-14T22:02:07.519833694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65458fd77d-95x2h,Uid:0d769861-a78b-4f05-b02a-2fd240daf60c,Namespace:calico-system,Attempt:0,}"
Jul 14 22:02:07.543896 containerd[1440]: time="2025-07-14T22:02:07.543665682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:02:07.543896 containerd[1440]: time="2025-07-14T22:02:07.543730801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:02:07.543896 containerd[1440]: time="2025-07-14T22:02:07.543741241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:02:07.543896 containerd[1440]: time="2025-07-14T22:02:07.543817640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:02:07.562722 systemd[1]: Started cri-containerd-23ea51256345fc1c812a94075abeeae2bcc05ceebaaccbae62193abfde33426f.scope - libcontainer container 23ea51256345fc1c812a94075abeeae2bcc05ceebaaccbae62193abfde33426f.
Jul 14 22:02:07.591544 containerd[1440]: time="2025-07-14T22:02:07.591498016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65458fd77d-95x2h,Uid:0d769861-a78b-4f05-b02a-2fd240daf60c,Namespace:calico-system,Attempt:0,} returns sandbox id \"23ea51256345fc1c812a94075abeeae2bcc05ceebaaccbae62193abfde33426f\""
Jul 14 22:02:07.593024 kubelet[2535]: E0714 22:02:07.593001 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:02:07.598360 containerd[1440]: time="2025-07-14T22:02:07.598331372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 14 22:02:07.640014 kubelet[2535]: I0714 22:02:07.639981 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6228dc15-e4ed-4bb5-9c48-a28a1d5afb58-cni-log-dir\") pod \"calico-node-mk6p8\" (UID: \"6228dc15-e4ed-4bb5-9c48-a28a1d5afb58\") " pod="calico-system/calico-node-mk6p8"
Jul 14 22:02:07.640014 kubelet[2535]: I0714 22:02:07.640017 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6228dc15-e4ed-4bb5-9c48-a28a1d5afb58-tigera-ca-bundle\") pod \"calico-node-mk6p8\" (UID: \"6228dc15-e4ed-4bb5-9c48-a28a1d5afb58\") " pod="calico-system/calico-node-mk6p8"
Jul 14 22:02:07.640141 kubelet[2535]: I0714 22:02:07.640042 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6228dc15-e4ed-4bb5-9c48-a28a1d5afb58-var-lib-calico\") pod \"calico-node-mk6p8\" (UID: \"6228dc15-e4ed-4bb5-9c48-a28a1d5afb58\") " pod="calico-system/calico-node-mk6p8"
Jul 14 22:02:07.640141 kubelet[2535]: I0714 22:02:07.640072 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6228dc15-e4ed-4bb5-9c48-a28a1d5afb58-flexvol-driver-host\") pod \"calico-node-mk6p8\" (UID: \"6228dc15-e4ed-4bb5-9c48-a28a1d5afb58\") " pod="calico-system/calico-node-mk6p8"
Jul 14 22:02:07.640141 kubelet[2535]: I0714 22:02:07.640093 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6228dc15-e4ed-4bb5-9c48-a28a1d5afb58-lib-modules\") pod \"calico-node-mk6p8\" (UID: \"6228dc15-e4ed-4bb5-9c48-a28a1d5afb58\") " pod="calico-system/calico-node-mk6p8"
Jul 14 22:02:07.640141 kubelet[2535]: I0714 22:02:07.640117 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6228dc15-e4ed-4bb5-9c48-a28a1d5afb58-node-certs\") pod \"calico-node-mk6p8\" (UID: \"6228dc15-e4ed-4bb5-9c48-a28a1d5afb58\") " pod="calico-system/calico-node-mk6p8"
Jul 14 22:02:07.640141 kubelet[2535]: I0714 22:02:07.640139 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6228dc15-e4ed-4bb5-9c48-a28a1d5afb58-var-run-calico\") pod \"calico-node-mk6p8\" (UID: \"6228dc15-e4ed-4bb5-9c48-a28a1d5afb58\") " pod="calico-system/calico-node-mk6p8"
Jul 14 22:02:07.640248 kubelet[2535]: I0714 22:02:07.640157 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6228dc15-e4ed-4bb5-9c48-a28a1d5afb58-cni-net-dir\") pod \"calico-node-mk6p8\" (UID: \"6228dc15-e4ed-4bb5-9c48-a28a1d5afb58\") " pod="calico-system/calico-node-mk6p8"
Jul 14 22:02:07.640248 kubelet[2535]: I0714 22:02:07.640183 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6228dc15-e4ed-4bb5-9c48-a28a1d5afb58-cni-bin-dir\") pod \"calico-node-mk6p8\" (UID: \"6228dc15-e4ed-4bb5-9c48-a28a1d5afb58\") " pod="calico-system/calico-node-mk6p8"
Jul 14 22:02:07.640248 kubelet[2535]: I0714 22:02:07.640201 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6228dc15-e4ed-4bb5-9c48-a28a1d5afb58-xtables-lock\") pod \"calico-node-mk6p8\" (UID: \"6228dc15-e4ed-4bb5-9c48-a28a1d5afb58\") " pod="calico-system/calico-node-mk6p8"
Jul 14 22:02:07.640248 kubelet[2535]: I0714 22:02:07.640217 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6228dc15-e4ed-4bb5-9c48-a28a1d5afb58-policysync\") pod \"calico-node-mk6p8\" (UID: \"6228dc15-e4ed-4bb5-9c48-a28a1d5afb58\") " pod="calico-system/calico-node-mk6p8"
Jul 14 22:02:07.640248 kubelet[2535]: I0714 22:02:07.640232 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2wvn\" (UniqueName: \"kubernetes.io/projected/6228dc15-e4ed-4bb5-9c48-a28a1d5afb58-kube-api-access-l2wvn\") pod \"calico-node-mk6p8\" (UID: \"6228dc15-e4ed-4bb5-9c48-a28a1d5afb58\") " pod="calico-system/calico-node-mk6p8"
Jul 14 22:02:07.734687 kubelet[2535]: E0714 22:02:07.734637 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbwq6" podUID="88e059ee-2b3a-4b57-8789-ebeef41ce071"
Jul 14 22:02:07.747053 kubelet[2535]: E0714 22:02:07.747026 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.747053 kubelet[2535]: W0714 22:02:07.747048 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.750089 kubelet[2535]: E0714 22:02:07.750063 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.751543 kubelet[2535]: E0714 22:02:07.751523 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.751543 kubelet[2535]: W0714 22:02:07.751539 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.751616 kubelet[2535]: E0714 22:02:07.751569 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.751753 kubelet[2535]: E0714 22:02:07.751738 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.751753 kubelet[2535]: W0714 22:02:07.751750 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.751811 kubelet[2535]: E0714 22:02:07.751765 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.751929 kubelet[2535]: E0714 22:02:07.751915 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.751929 kubelet[2535]: W0714 22:02:07.751926 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.751979 kubelet[2535]: E0714 22:02:07.751934 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.752093 kubelet[2535]: E0714 22:02:07.752079 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.752093 kubelet[2535]: W0714 22:02:07.752090 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.752147 kubelet[2535]: E0714 22:02:07.752102 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.752258 kubelet[2535]: E0714 22:02:07.752243 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.752258 kubelet[2535]: W0714 22:02:07.752256 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.752307 kubelet[2535]: E0714 22:02:07.752268 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.752425 kubelet[2535]: E0714 22:02:07.752412 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.752425 kubelet[2535]: W0714 22:02:07.752422 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.752487 kubelet[2535]: E0714 22:02:07.752433 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.752659 kubelet[2535]: E0714 22:02:07.752642 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.752659 kubelet[2535]: W0714 22:02:07.752654 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.752720 kubelet[2535]: E0714 22:02:07.752706 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.752823 kubelet[2535]: E0714 22:02:07.752809 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.752823 kubelet[2535]: W0714 22:02:07.752819 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.752913 kubelet[2535]: E0714 22:02:07.752832 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.753955 kubelet[2535]: E0714 22:02:07.752990 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.753955 kubelet[2535]: W0714 22:02:07.753000 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.753955 kubelet[2535]: E0714 22:02:07.753009 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.753955 kubelet[2535]: E0714 22:02:07.753350 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.753955 kubelet[2535]: W0714 22:02:07.753367 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.753955 kubelet[2535]: E0714 22:02:07.753378 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.754130 kubelet[2535]: E0714 22:02:07.753957 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.754130 kubelet[2535]: W0714 22:02:07.754048 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.754130 kubelet[2535]: E0714 22:02:07.754118 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.754316 kubelet[2535]: E0714 22:02:07.754299 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.754316 kubelet[2535]: W0714 22:02:07.754311 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.754539 kubelet[2535]: E0714 22:02:07.754319 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.754539 kubelet[2535]: E0714 22:02:07.754477 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.754539 kubelet[2535]: W0714 22:02:07.754484 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.754539 kubelet[2535]: E0714 22:02:07.754491 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.754646 kubelet[2535]: E0714 22:02:07.754632 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.754646 kubelet[2535]: W0714 22:02:07.754644 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.754697 kubelet[2535]: E0714 22:02:07.754652 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.754804 kubelet[2535]: E0714 22:02:07.754793 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.754829 kubelet[2535]: W0714 22:02:07.754803 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.754829 kubelet[2535]: E0714 22:02:07.754811 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.754942 kubelet[2535]: E0714 22:02:07.754933 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.754971 kubelet[2535]: W0714 22:02:07.754942 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.754971 kubelet[2535]: E0714 22:02:07.754950 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.755077 kubelet[2535]: E0714 22:02:07.755068 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.755097 kubelet[2535]: W0714 22:02:07.755076 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.755097 kubelet[2535]: E0714 22:02:07.755084 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.755224 kubelet[2535]: E0714 22:02:07.755215 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.755248 kubelet[2535]: W0714 22:02:07.755224 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.755248 kubelet[2535]: E0714 22:02:07.755232 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.755358 kubelet[2535]: E0714 22:02:07.755349 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.755381 kubelet[2535]: W0714 22:02:07.755358 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.755381 kubelet[2535]: E0714 22:02:07.755364 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.756514 kubelet[2535]: E0714 22:02:07.755758 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.756514 kubelet[2535]: W0714 22:02:07.755773 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.756514 kubelet[2535]: E0714 22:02:07.755785 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.756514 kubelet[2535]: E0714 22:02:07.755955 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.756514 kubelet[2535]: W0714 22:02:07.755962 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.756514 kubelet[2535]: E0714 22:02:07.755973 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.806648 containerd[1440]: time="2025-07-14T22:02:07.806607062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mk6p8,Uid:6228dc15-e4ed-4bb5-9c48-a28a1d5afb58,Namespace:calico-system,Attempt:0,}"
Jul 14 22:02:07.842293 kubelet[2535]: E0714 22:02:07.842040 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.842293 kubelet[2535]: W0714 22:02:07.842246 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.842293 kubelet[2535]: E0714 22:02:07.842270 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.842293 kubelet[2535]: I0714 22:02:07.842298 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb2g9\" (UniqueName: \"kubernetes.io/projected/88e059ee-2b3a-4b57-8789-ebeef41ce071-kube-api-access-mb2g9\") pod \"csi-node-driver-qbwq6\" (UID: \"88e059ee-2b3a-4b57-8789-ebeef41ce071\") " pod="calico-system/csi-node-driver-qbwq6"
Jul 14 22:02:07.842991 kubelet[2535]: E0714 22:02:07.842972 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.843038 kubelet[2535]: W0714 22:02:07.842998 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.843094 kubelet[2535]: E0714 22:02:07.843082 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.843123 kubelet[2535]: I0714 22:02:07.843103 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88e059ee-2b3a-4b57-8789-ebeef41ce071-kubelet-dir\") pod \"csi-node-driver-qbwq6\" (UID: \"88e059ee-2b3a-4b57-8789-ebeef41ce071\") " pod="calico-system/csi-node-driver-qbwq6"
Jul 14 22:02:07.843578 kubelet[2535]: E0714 22:02:07.843362 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.843578 kubelet[2535]: W0714 22:02:07.843386 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.843578 kubelet[2535]: E0714 22:02:07.843403 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.843578 kubelet[2535]: I0714 22:02:07.843520 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/88e059ee-2b3a-4b57-8789-ebeef41ce071-varrun\") pod \"csi-node-driver-qbwq6\" (UID: \"88e059ee-2b3a-4b57-8789-ebeef41ce071\") " pod="calico-system/csi-node-driver-qbwq6"
Jul 14 22:02:07.844053 kubelet[2535]: E0714 22:02:07.843862 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.844053 kubelet[2535]: W0714 22:02:07.843875 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.844053 kubelet[2535]: E0714 22:02:07.843997 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.844053 kubelet[2535]: I0714 22:02:07.844029 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/88e059ee-2b3a-4b57-8789-ebeef41ce071-registration-dir\") pod \"csi-node-driver-qbwq6\" (UID: \"88e059ee-2b3a-4b57-8789-ebeef41ce071\") " pod="calico-system/csi-node-driver-qbwq6"
Jul 14 22:02:07.844230 kubelet[2535]: E0714 22:02:07.844215 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.844230 kubelet[2535]: W0714 22:02:07.844228 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.844652 kubelet[2535]: E0714 22:02:07.844270 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.844652 kubelet[2535]: E0714 22:02:07.844401 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.844652 kubelet[2535]: W0714 22:02:07.844409 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.844652 kubelet[2535]: E0714 22:02:07.844471 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.844652 kubelet[2535]: E0714 22:02:07.844583 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.844652 kubelet[2535]: W0714 22:02:07.844591 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.844652 kubelet[2535]: E0714 22:02:07.844605 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.845017 kubelet[2535]: E0714 22:02:07.844807 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.845017 kubelet[2535]: W0714 22:02:07.844816 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.845017 kubelet[2535]: E0714 22:02:07.844831 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.845017 kubelet[2535]: I0714 22:02:07.844907 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/88e059ee-2b3a-4b57-8789-ebeef41ce071-socket-dir\") pod \"csi-node-driver-qbwq6\" (UID: \"88e059ee-2b3a-4b57-8789-ebeef41ce071\") " pod="calico-system/csi-node-driver-qbwq6"
Jul 14 22:02:07.845360 kubelet[2535]: E0714 22:02:07.845247 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.845360 kubelet[2535]: W0714 22:02:07.845263 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.845360 kubelet[2535]: E0714 22:02:07.845281 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.845801 kubelet[2535]: E0714 22:02:07.845508 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.845801 kubelet[2535]: W0714 22:02:07.845520 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.845801 kubelet[2535]: E0714 22:02:07.845530 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.845801 kubelet[2535]: E0714 22:02:07.845728 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.845801 kubelet[2535]: W0714 22:02:07.845739 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.845801 kubelet[2535]: E0714 22:02:07.845776 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.845985 containerd[1440]: time="2025-07-14T22:02:07.845174030Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:02:07.845985 containerd[1440]: time="2025-07-14T22:02:07.845648064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:02:07.845985 containerd[1440]: time="2025-07-14T22:02:07.845667264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:02:07.845985 containerd[1440]: time="2025-07-14T22:02:07.845759503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:02:07.846098 kubelet[2535]: E0714 22:02:07.845964 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.846098 kubelet[2535]: W0714 22:02:07.845973 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.846098 kubelet[2535]: E0714 22:02:07.845982 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.846201 kubelet[2535]: E0714 22:02:07.846187 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.846201 kubelet[2535]: W0714 22:02:07.846199 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.846286 kubelet[2535]: E0714 22:02:07.846209 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.846375 kubelet[2535]: E0714 22:02:07.846362 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.846375 kubelet[2535]: W0714 22:02:07.846373 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.846434 kubelet[2535]: E0714 22:02:07.846380 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.846589 kubelet[2535]: E0714 22:02:07.846575 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.846589 kubelet[2535]: W0714 22:02:07.846588 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.846641 kubelet[2535]: E0714 22:02:07.846597 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.863601 systemd[1]: Started cri-containerd-3671fcb0dc2e969e8156f6e67dfc5015d2694805e826cbce7d4031b0010c7465.scope - libcontainer container 3671fcb0dc2e969e8156f6e67dfc5015d2694805e826cbce7d4031b0010c7465.
Jul 14 22:02:07.879846 containerd[1440]: time="2025-07-14T22:02:07.879813006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mk6p8,Uid:6228dc15-e4ed-4bb5-9c48-a28a1d5afb58,Namespace:calico-system,Attempt:0,} returns sandbox id \"3671fcb0dc2e969e8156f6e67dfc5015d2694805e826cbce7d4031b0010c7465\""
Jul 14 22:02:07.946733 kubelet[2535]: E0714 22:02:07.946686 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.946733 kubelet[2535]: W0714 22:02:07.946723 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.946733 kubelet[2535]: E0714 22:02:07.946746 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.947449 kubelet[2535]: E0714 22:02:07.947411 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.947449 kubelet[2535]: W0714 22:02:07.947429 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.947449 kubelet[2535]: E0714 22:02:07.947445 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.947837 kubelet[2535]: E0714 22:02:07.947797 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.947837 kubelet[2535]: W0714 22:02:07.947817 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.947837 kubelet[2535]: E0714 22:02:07.947833 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.948308 kubelet[2535]: E0714 22:02:07.948287 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.948308 kubelet[2535]: W0714 22:02:07.948304 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.948520 kubelet[2535]: E0714 22:02:07.948321 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.948729 kubelet[2535]: E0714 22:02:07.948710 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.948802 kubelet[2535]: W0714 22:02:07.948790 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.948925 kubelet[2535]: E0714 22:02:07.948859 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.949142 kubelet[2535]: E0714 22:02:07.949129 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.949465 kubelet[2535]: W0714 22:02:07.949262 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.949465 kubelet[2535]: E0714 22:02:07.949302 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.949621 kubelet[2535]: E0714 22:02:07.949606 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.949677 kubelet[2535]: W0714 22:02:07.949667 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.949804 kubelet[2535]: E0714 22:02:07.949743 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.950002 kubelet[2535]: E0714 22:02:07.949987 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.950072 kubelet[2535]: W0714 22:02:07.950059 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.950202 kubelet[2535]: E0714 22:02:07.950136 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.950355 kubelet[2535]: E0714 22:02:07.950312 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.950423 kubelet[2535]: W0714 22:02:07.950411 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.950617 kubelet[2535]: E0714 22:02:07.950494 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.950833 kubelet[2535]: E0714 22:02:07.950736 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.950833 kubelet[2535]: W0714 22:02:07.950752 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.950833 kubelet[2535]: E0714 22:02:07.950789 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.951127 kubelet[2535]: E0714 22:02:07.951112 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.951199 kubelet[2535]: W0714 22:02:07.951186 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.951400 kubelet[2535]: E0714 22:02:07.951304 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.951607 kubelet[2535]: E0714 22:02:07.951511 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.951607 kubelet[2535]: W0714 22:02:07.951525 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.951607 kubelet[2535]: E0714 22:02:07.951559 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.952447 kubelet[2535]: E0714 22:02:07.951800 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.952447 kubelet[2535]: W0714 22:02:07.952339 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.952447 kubelet[2535]: E0714 22:02:07.952427 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.952593 kubelet[2535]: E0714 22:02:07.952577 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.952593 kubelet[2535]: W0714 22:02:07.952590 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.952665 kubelet[2535]: E0714 22:02:07.952646 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.952892 kubelet[2535]: E0714 22:02:07.952863 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.952892 kubelet[2535]: W0714 22:02:07.952879 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.952942 kubelet[2535]: E0714 22:02:07.952913 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.953513 kubelet[2535]: E0714 22:02:07.953475 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.953513 kubelet[2535]: W0714 22:02:07.953494 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.953604 kubelet[2535]: E0714 22:02:07.953567 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.954262 kubelet[2535]: E0714 22:02:07.954240 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.954262 kubelet[2535]: W0714 22:02:07.954257 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.954355 kubelet[2535]: E0714 22:02:07.954295 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.954604 kubelet[2535]: E0714 22:02:07.954534 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.954604 kubelet[2535]: W0714 22:02:07.954550 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.954604 kubelet[2535]: E0714 22:02:07.954578 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.955255 kubelet[2535]: E0714 22:02:07.955196 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.955255 kubelet[2535]: W0714 22:02:07.955212 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.955255 kubelet[2535]: E0714 22:02:07.955241 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.956121 kubelet[2535]: E0714 22:02:07.955484 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.956121 kubelet[2535]: W0714 22:02:07.955502 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.956121 kubelet[2535]: E0714 22:02:07.955971 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.965889 kubelet[2535]: E0714 22:02:07.965852 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.965889 kubelet[2535]: W0714 22:02:07.965869 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.966860 kubelet[2535]: E0714 22:02:07.966076 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.966860 kubelet[2535]: E0714 22:02:07.966098 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.966860 kubelet[2535]: W0714 22:02:07.966109 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.966860 kubelet[2535]: E0714 22:02:07.966134 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.966860 kubelet[2535]: E0714 22:02:07.966292 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.966860 kubelet[2535]: W0714 22:02:07.966300 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.966860 kubelet[2535]: E0714 22:02:07.966332 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.966860 kubelet[2535]: E0714 22:02:07.966611 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.966860 kubelet[2535]: W0714 22:02:07.966622 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.966860 kubelet[2535]: E0714 22:02:07.966638 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.967078 kubelet[2535]: E0714 22:02:07.966870 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.967078 kubelet[2535]: W0714 22:02:07.966880 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.967078 kubelet[2535]: E0714 22:02:07.966894 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:07.967078 kubelet[2535]: E0714 22:02:07.967068 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:07.967078 kubelet[2535]: W0714 22:02:07.967076 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:07.967173 kubelet[2535]: E0714 22:02:07.967085 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:08.769369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2310783835.mount: Deactivated successfully.
Jul 14 22:02:09.387820 containerd[1440]: time="2025-07-14T22:02:09.387770098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:02:09.388751 containerd[1440]: time="2025-07-14T22:02:09.388715607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Jul 14 22:02:09.389614 containerd[1440]: time="2025-07-14T22:02:09.389586436Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:02:09.392027 containerd[1440]: time="2025-07-14T22:02:09.391990608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:02:09.392786 containerd[1440]: time="2025-07-14T22:02:09.392751358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.794383747s"
Jul 14 22:02:09.392843 containerd[1440]: time="2025-07-14T22:02:09.392797518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 14 22:02:09.398873 containerd[1440]: time="2025-07-14T22:02:09.398835966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 14 22:02:09.414167 containerd[1440]: time="2025-07-14T22:02:09.414121943Z" level=info msg="CreateContainer within sandbox \"23ea51256345fc1c812a94075abeeae2bcc05ceebaaccbae62193abfde33426f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 14 22:02:09.425398 containerd[1440]: time="2025-07-14T22:02:09.425355649Z" level=info msg="CreateContainer within sandbox \"23ea51256345fc1c812a94075abeeae2bcc05ceebaaccbae62193abfde33426f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c7834a177397bb0f885aa615606989922f5709e86c04be731da8197f2de5b8ab\""
Jul 14 22:02:09.425799 containerd[1440]: time="2025-07-14T22:02:09.425777404Z" level=info msg="StartContainer for \"c7834a177397bb0f885aa615606989922f5709e86c04be731da8197f2de5b8ab\""
Jul 14 22:02:09.449733 systemd[1]: Started cri-containerd-c7834a177397bb0f885aa615606989922f5709e86c04be731da8197f2de5b8ab.scope - libcontainer container c7834a177397bb0f885aa615606989922f5709e86c04be731da8197f2de5b8ab.
Jul 14 22:02:09.481691 containerd[1440]: time="2025-07-14T22:02:09.481355021Z" level=info msg="StartContainer for \"c7834a177397bb0f885aa615606989922f5709e86c04be731da8197f2de5b8ab\" returns successfully"
Jul 14 22:02:09.611483 kubelet[2535]: E0714 22:02:09.608894 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbwq6" podUID="88e059ee-2b3a-4b57-8789-ebeef41ce071"
Jul 14 22:02:09.702011 kubelet[2535]: E0714 22:02:09.701900 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:02:09.715112 kubelet[2535]: I0714 22:02:09.714564 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65458fd77d-95x2h" podStartSLOduration=0.909359461 podStartE2EDuration="2.714547317s" podCreationTimestamp="2025-07-14 22:02:07 +0000 UTC" firstStartedPulling="2025-07-14 22:02:07.593417873 +0000 UTC m=+25.071404597" lastFinishedPulling="2025-07-14 22:02:09.398605729 +0000 UTC m=+26.876592453" observedRunningTime="2025-07-14 22:02:09.714342039 +0000 UTC m=+27.192328763" watchObservedRunningTime="2025-07-14 22:02:09.714547317 +0000 UTC m=+27.192534041"
Jul 14 22:02:09.772832 kubelet[2535]: E0714 22:02:09.772793 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:09.772832 kubelet[2535]: W0714 22:02:09.772816 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:09.772832 kubelet[2535]: E0714 22:02:09.772837 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:09.773022 kubelet[2535]: E0714 22:02:09.772979 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:09.773022 kubelet[2535]: W0714 22:02:09.772986 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:09.773022 kubelet[2535]: E0714 22:02:09.772994 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:09.773152 kubelet[2535]: E0714 22:02:09.773130 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:09.773152 kubelet[2535]: W0714 22:02:09.773141 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:09.773152 kubelet[2535]: E0714 22:02:09.773149 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:09.773302 kubelet[2535]: E0714 22:02:09.773283 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:09.773302 kubelet[2535]: W0714 22:02:09.773294 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:09.773302 kubelet[2535]: E0714 22:02:09.773302 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:09.773486 kubelet[2535]: E0714 22:02:09.773469 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:09.773486 kubelet[2535]: W0714 22:02:09.773484 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:09.773544 kubelet[2535]: E0714 22:02:09.773503 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:09.773673 kubelet[2535]: E0714 22:02:09.773648 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:09.773673 kubelet[2535]: W0714 22:02:09.773660 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:09.773726 kubelet[2535]: E0714 22:02:09.773676 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:09.773830 kubelet[2535]: E0714 22:02:09.773818 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:09.773857 kubelet[2535]: W0714 22:02:09.773830 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:09.773857 kubelet[2535]: E0714 22:02:09.773838 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:09.773979 kubelet[2535]: E0714 22:02:09.773969 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:09.774005 kubelet[2535]: W0714 22:02:09.773979 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:09.774005 kubelet[2535]: E0714 22:02:09.773987 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:09.774144 kubelet[2535]: E0714 22:02:09.774125 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:09.774144 kubelet[2535]: W0714 22:02:09.774135 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:09.774144 kubelet[2535]: E0714 22:02:09.774143 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:09.774278 kubelet[2535]: E0714 22:02:09.774268 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:09.774278 kubelet[2535]: W0714 22:02:09.774278 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:09.774327 kubelet[2535]: E0714 22:02:09.774288 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 22:02:09.774432 kubelet[2535]: E0714 22:02:09.774422 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 22:02:09.774465 kubelet[2535]: W0714 22:02:09.774432 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 22:02:09.774465 kubelet[2535]: E0714 22:02:09.774439 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 14 22:02:09.774603 kubelet[2535]: E0714 22:02:09.774591 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.774635 kubelet[2535]: W0714 22:02:09.774604 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.774635 kubelet[2535]: E0714 22:02:09.774613 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.774794 kubelet[2535]: E0714 22:02:09.774782 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.774820 kubelet[2535]: W0714 22:02:09.774794 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.774820 kubelet[2535]: E0714 22:02:09.774804 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.774960 kubelet[2535]: E0714 22:02:09.774943 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.774986 kubelet[2535]: W0714 22:02:09.774960 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.774986 kubelet[2535]: E0714 22:02:09.774969 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.775121 kubelet[2535]: E0714 22:02:09.775105 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.775146 kubelet[2535]: W0714 22:02:09.775121 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.775146 kubelet[2535]: E0714 22:02:09.775131 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.864304 kubelet[2535]: E0714 22:02:09.864268 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.864304 kubelet[2535]: W0714 22:02:09.864294 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.864304 kubelet[2535]: E0714 22:02:09.864314 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:02:09.867123 kubelet[2535]: E0714 22:02:09.865107 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.867123 kubelet[2535]: W0714 22:02:09.865123 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.867123 kubelet[2535]: E0714 22:02:09.865142 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.867123 kubelet[2535]: E0714 22:02:09.865689 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.867123 kubelet[2535]: W0714 22:02:09.865700 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.867123 kubelet[2535]: E0714 22:02:09.865724 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.867123 kubelet[2535]: E0714 22:02:09.866500 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.867123 kubelet[2535]: W0714 22:02:09.866511 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.867123 kubelet[2535]: E0714 22:02:09.866542 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.867123 kubelet[2535]: E0714 22:02:09.866829 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.867392 kubelet[2535]: W0714 22:02:09.866878 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.867392 kubelet[2535]: E0714 22:02:09.866958 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.867392 kubelet[2535]: E0714 22:02:09.867352 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.867392 kubelet[2535]: W0714 22:02:09.867364 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.867501 kubelet[2535]: E0714 22:02:09.867445 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:02:09.867622 kubelet[2535]: E0714 22:02:09.867593 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.867622 kubelet[2535]: W0714 22:02:09.867607 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.867622 kubelet[2535]: E0714 22:02:09.867621 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.867793 kubelet[2535]: E0714 22:02:09.867773 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.867793 kubelet[2535]: W0714 22:02:09.867784 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.867793 kubelet[2535]: E0714 22:02:09.867793 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.867998 kubelet[2535]: E0714 22:02:09.867954 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.867998 kubelet[2535]: W0714 22:02:09.867992 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.868052 kubelet[2535]: E0714 22:02:09.868006 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.868256 kubelet[2535]: E0714 22:02:09.868232 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.868256 kubelet[2535]: W0714 22:02:09.868247 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.868307 kubelet[2535]: E0714 22:02:09.868263 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.868416 kubelet[2535]: E0714 22:02:09.868407 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.868439 kubelet[2535]: W0714 22:02:09.868416 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.868439 kubelet[2535]: E0714 22:02:09.868428 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:02:09.868624 kubelet[2535]: E0714 22:02:09.868611 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.868653 kubelet[2535]: W0714 22:02:09.868627 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.868653 kubelet[2535]: E0714 22:02:09.868642 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.868923 kubelet[2535]: E0714 22:02:09.868904 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.868923 kubelet[2535]: W0714 22:02:09.868920 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.868986 kubelet[2535]: E0714 22:02:09.868938 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.869134 kubelet[2535]: E0714 22:02:09.869123 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.869134 kubelet[2535]: W0714 22:02:09.869133 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.869177 kubelet[2535]: E0714 22:02:09.869146 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.869320 kubelet[2535]: E0714 22:02:09.869306 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.869346 kubelet[2535]: W0714 22:02:09.869324 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.869346 kubelet[2535]: E0714 22:02:09.869339 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.869531 kubelet[2535]: E0714 22:02:09.869518 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.869531 kubelet[2535]: W0714 22:02:09.869529 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.869598 kubelet[2535]: E0714 22:02:09.869538 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:02:09.869717 kubelet[2535]: E0714 22:02:09.869705 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.869717 kubelet[2535]: W0714 22:02:09.869715 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.869770 kubelet[2535]: E0714 22:02:09.869723 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:09.870102 kubelet[2535]: E0714 22:02:09.870076 2535 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:02:09.870102 kubelet[2535]: W0714 22:02:09.870091 2535 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:02:09.870102 kubelet[2535]: E0714 22:02:09.870101 2535 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:02:10.424025 containerd[1440]: time="2025-07-14T22:02:10.423193676Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:10.424025 containerd[1440]: time="2025-07-14T22:02:10.423988947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 14 22:02:10.424528 containerd[1440]: time="2025-07-14T22:02:10.424492941Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:10.426275 containerd[1440]: time="2025-07-14T22:02:10.426235000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:10.426901 containerd[1440]: time="2025-07-14T22:02:10.426870993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.027992308s" Jul 14 22:02:10.426946 containerd[1440]: time="2025-07-14T22:02:10.426906872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 14 22:02:10.429401 containerd[1440]: time="2025-07-14T22:02:10.429012728Z" level=info msg="CreateContainer within sandbox \"3671fcb0dc2e969e8156f6e67dfc5015d2694805e826cbce7d4031b0010c7465\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 14 22:02:10.442136 containerd[1440]: time="2025-07-14T22:02:10.442065134Z" level=info msg="CreateContainer within sandbox 
\"3671fcb0dc2e969e8156f6e67dfc5015d2694805e826cbce7d4031b0010c7465\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a613716c5fe56dd8e9bcb0e12e72aaa2a1c3308e4516b4ac40a6ac558681a27c\"" Jul 14 22:02:10.442577 containerd[1440]: time="2025-07-14T22:02:10.442537328Z" level=info msg="StartContainer for \"a613716c5fe56dd8e9bcb0e12e72aaa2a1c3308e4516b4ac40a6ac558681a27c\"" Jul 14 22:02:10.474621 systemd[1]: Started cri-containerd-a613716c5fe56dd8e9bcb0e12e72aaa2a1c3308e4516b4ac40a6ac558681a27c.scope - libcontainer container a613716c5fe56dd8e9bcb0e12e72aaa2a1c3308e4516b4ac40a6ac558681a27c. Jul 14 22:02:10.502132 containerd[1440]: time="2025-07-14T22:02:10.502085906Z" level=info msg="StartContainer for \"a613716c5fe56dd8e9bcb0e12e72aaa2a1c3308e4516b4ac40a6ac558681a27c\" returns successfully" Jul 14 22:02:10.539107 systemd[1]: cri-containerd-a613716c5fe56dd8e9bcb0e12e72aaa2a1c3308e4516b4ac40a6ac558681a27c.scope: Deactivated successfully. Jul 14 22:02:10.566808 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a613716c5fe56dd8e9bcb0e12e72aaa2a1c3308e4516b4ac40a6ac558681a27c-rootfs.mount: Deactivated successfully. Jul 14 22:02:10.589373 containerd[1440]: time="2025-07-14T22:02:10.581394210Z" level=info msg="shim disconnected" id=a613716c5fe56dd8e9bcb0e12e72aaa2a1c3308e4516b4ac40a6ac558681a27c namespace=k8s.io Jul 14 22:02:10.589373 containerd[1440]: time="2025-07-14T22:02:10.589367636Z" level=warning msg="cleaning up after shim disconnected" id=a613716c5fe56dd8e9bcb0e12e72aaa2a1c3308e4516b4ac40a6ac558681a27c namespace=k8s.io Jul 14 22:02:10.589373 containerd[1440]: time="2025-07-14T22:02:10.589380756Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:02:10.709294 kubelet[2535]: I0714 22:02:10.708974 2535 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:02:10.710759 kubelet[2535]: E0714 22:02:10.710741 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:02:10.711032 containerd[1440]: time="2025-07-14T22:02:10.710986721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 14 22:02:11.608918 kubelet[2535]: E0714 22:02:11.608857 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbwq6" podUID="88e059ee-2b3a-4b57-8789-ebeef41ce071" Jul 14 22:02:13.044934 containerd[1440]: time="2025-07-14T22:02:13.044888145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:13.046147 containerd[1440]: time="2025-07-14T22:02:13.046115451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 14 22:02:13.046845 containerd[1440]: time="2025-07-14T22:02:13.046782123Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:13.048760 containerd[1440]: time="2025-07-14T22:02:13.048712821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 14 22:02:13.049568 containerd[1440]: time="2025-07-14T22:02:13.049466613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.338418213s" Jul 14 22:02:13.049568 containerd[1440]: time="2025-07-14T22:02:13.049496092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 14 22:02:13.052249 containerd[1440]: time="2025-07-14T22:02:13.052215461Z" level=info msg="CreateContainer within sandbox \"3671fcb0dc2e969e8156f6e67dfc5015d2694805e826cbce7d4031b0010c7465\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 14 22:02:13.066861 containerd[1440]: time="2025-07-14T22:02:13.066700816Z" level=info msg="CreateContainer within sandbox \"3671fcb0dc2e969e8156f6e67dfc5015d2694805e826cbce7d4031b0010c7465\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d177541c22c8cd0556cbab3dfc75b9b0f6fbcd4306b8cc85b8dd33a7572205a3\"" Jul 14 22:02:13.068116 containerd[1440]: time="2025-07-14T22:02:13.068084920Z" level=info msg="StartContainer for \"d177541c22c8cd0556cbab3dfc75b9b0f6fbcd4306b8cc85b8dd33a7572205a3\"" Jul 14 22:02:13.097627 systemd[1]: Started cri-containerd-d177541c22c8cd0556cbab3dfc75b9b0f6fbcd4306b8cc85b8dd33a7572205a3.scope - libcontainer container d177541c22c8cd0556cbab3dfc75b9b0f6fbcd4306b8cc85b8dd33a7572205a3. Jul 14 22:02:13.122485 containerd[1440]: time="2025-07-14T22:02:13.122328900Z" level=info msg="StartContainer for \"d177541c22c8cd0556cbab3dfc75b9b0f6fbcd4306b8cc85b8dd33a7572205a3\" returns successfully" Jul 14 22:02:13.608310 kubelet[2535]: E0714 22:02:13.608244 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qbwq6" podUID="88e059ee-2b3a-4b57-8789-ebeef41ce071" Jul 14 22:02:13.710584 systemd[1]: cri-containerd-d177541c22c8cd0556cbab3dfc75b9b0f6fbcd4306b8cc85b8dd33a7572205a3.scope: Deactivated successfully. Jul 14 22:02:13.744898 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d177541c22c8cd0556cbab3dfc75b9b0f6fbcd4306b8cc85b8dd33a7572205a3-rootfs.mount: Deactivated successfully. Jul 14 22:02:13.748168 containerd[1440]: time="2025-07-14T22:02:13.748117907Z" level=info msg="shim disconnected" id=d177541c22c8cd0556cbab3dfc75b9b0f6fbcd4306b8cc85b8dd33a7572205a3 namespace=k8s.io Jul 14 22:02:13.748168 containerd[1440]: time="2025-07-14T22:02:13.748167267Z" level=warning msg="cleaning up after shim disconnected" id=d177541c22c8cd0556cbab3dfc75b9b0f6fbcd4306b8cc85b8dd33a7572205a3 namespace=k8s.io Jul 14 22:02:13.748168 containerd[1440]: time="2025-07-14T22:02:13.748175186Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:02:13.753558 kubelet[2535]: I0714 22:02:13.752986 2535 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 14 22:02:13.796776 systemd[1]: Created slice kubepods-burstable-pod07dad9bf_62f6_44ab_88b7_926fd88e9c73.slice - libcontainer container kubepods-burstable-pod07dad9bf_62f6_44ab_88b7_926fd88e9c73.slice. 
Jul 14 22:02:13.807270 systemd[1]: Created slice kubepods-burstable-pod7c8f785c_fa60_472e_a6e1_a21274af8925.slice - libcontainer container kubepods-burstable-pod7c8f785c_fa60_472e_a6e1_a21274af8925.slice.
Jul 14 22:02:13.819753 systemd[1]: Created slice kubepods-besteffort-podb98ca949_7af6_44b4_b15a_f51c51b97182.slice - libcontainer container kubepods-besteffort-podb98ca949_7af6_44b4_b15a_f51c51b97182.slice.
Jul 14 22:02:13.824663 systemd[1]: Created slice kubepods-besteffort-pod69b2a205_08a9_48bb_b9c3_874e85d81984.slice - libcontainer container kubepods-besteffort-pod69b2a205_08a9_48bb_b9c3_874e85d81984.slice.
Jul 14 22:02:13.830378 systemd[1]: Created slice kubepods-besteffort-poddb3be38c_70ff_4df0_a2d5_d0462c499962.slice - libcontainer container kubepods-besteffort-poddb3be38c_70ff_4df0_a2d5_d0462c499962.slice.
Jul 14 22:02:13.836426 systemd[1]: Created slice kubepods-besteffort-pod3f90d3d1_992d_4417_b4aa_7efb36d87df3.slice - libcontainer container kubepods-besteffort-pod3f90d3d1_992d_4417_b4aa_7efb36d87df3.slice.
Jul 14 22:02:13.841371 systemd[1]: Created slice kubepods-besteffort-podd2664461_d898_4ff2_850a_8e3d73709f9a.slice - libcontainer container kubepods-besteffort-podd2664461_d898_4ff2_850a_8e3d73709f9a.slice.
Jul 14 22:02:13.908790 kubelet[2535]: I0714 22:02:13.908673 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d2664461-d898-4ff2-850a-8e3d73709f9a-goldmane-key-pair\") pod \"goldmane-58fd7646b9-66djn\" (UID: \"d2664461-d898-4ff2-850a-8e3d73709f9a\") " pod="calico-system/goldmane-58fd7646b9-66djn"
Jul 14 22:02:13.908790 kubelet[2535]: I0714 22:02:13.908720 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/db3be38c-70ff-4df0-a2d5-d0462c499962-calico-apiserver-certs\") pod \"calico-apiserver-687547dbff-8vw7r\" (UID: \"db3be38c-70ff-4df0-a2d5-d0462c499962\") " pod="calico-apiserver/calico-apiserver-687547dbff-8vw7r"
Jul 14 22:02:13.908790 kubelet[2535]: I0714 22:02:13.908739 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c8f785c-fa60-472e-a6e1-a21274af8925-config-volume\") pod \"coredns-7c65d6cfc9-ss2ss\" (UID: \"7c8f785c-fa60-472e-a6e1-a21274af8925\") " pod="kube-system/coredns-7c65d6cfc9-ss2ss"
Jul 14 22:02:13.908790 kubelet[2535]: I0714 22:02:13.908756 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07dad9bf-62f6-44ab-88b7-926fd88e9c73-config-volume\") pod \"coredns-7c65d6cfc9-wftww\" (UID: \"07dad9bf-62f6-44ab-88b7-926fd88e9c73\") " pod="kube-system/coredns-7c65d6cfc9-wftww"
Jul 14 22:02:13.908790 kubelet[2535]: I0714 22:02:13.908772 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2664461-d898-4ff2-850a-8e3d73709f9a-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-66djn\" (UID: \"d2664461-d898-4ff2-850a-8e3d73709f9a\") " pod="calico-system/goldmane-58fd7646b9-66djn"
Jul 14 22:02:13.908987 kubelet[2535]: I0714 22:02:13.908789 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqcgl\" (UniqueName: \"kubernetes.io/projected/db3be38c-70ff-4df0-a2d5-d0462c499962-kube-api-access-cqcgl\") pod \"calico-apiserver-687547dbff-8vw7r\" (UID: \"db3be38c-70ff-4df0-a2d5-d0462c499962\") " pod="calico-apiserver/calico-apiserver-687547dbff-8vw7r"
Jul 14 22:02:13.908987 kubelet[2535]: I0714 22:02:13.908804 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qnjh\" (UniqueName: \"kubernetes.io/projected/69b2a205-08a9-48bb-b9c3-874e85d81984-kube-api-access-7qnjh\") pod \"calico-kube-controllers-568d8c6dc9-hlgkn\" (UID: \"69b2a205-08a9-48bb-b9c3-874e85d81984\") " pod="calico-system/calico-kube-controllers-568d8c6dc9-hlgkn"
Jul 14 22:02:13.908987 kubelet[2535]: I0714 22:02:13.908823 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4hhw\" (UniqueName: \"kubernetes.io/projected/07dad9bf-62f6-44ab-88b7-926fd88e9c73-kube-api-access-t4hhw\") pod \"coredns-7c65d6cfc9-wftww\" (UID: \"07dad9bf-62f6-44ab-88b7-926fd88e9c73\") " pod="kube-system/coredns-7c65d6cfc9-wftww"
Jul 14 22:02:13.908987 kubelet[2535]: I0714 22:02:13.908865 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljjlg\" (UniqueName: \"kubernetes.io/projected/b98ca949-7af6-44b4-b15a-f51c51b97182-kube-api-access-ljjlg\") pod \"calico-apiserver-687547dbff-nxmhz\" (UID: \"b98ca949-7af6-44b4-b15a-f51c51b97182\") " pod="calico-apiserver/calico-apiserver-687547dbff-nxmhz"
Jul 14 22:02:13.908987 kubelet[2535]: I0714 22:02:13.908884 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69b2a205-08a9-48bb-b9c3-874e85d81984-tigera-ca-bundle\") pod \"calico-kube-controllers-568d8c6dc9-hlgkn\" (UID: \"69b2a205-08a9-48bb-b9c3-874e85d81984\") " pod="calico-system/calico-kube-controllers-568d8c6dc9-hlgkn"
Jul 14 22:02:13.909095 kubelet[2535]: I0714 22:02:13.908898 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlbzd\" (UniqueName: \"kubernetes.io/projected/3f90d3d1-992d-4417-b4aa-7efb36d87df3-kube-api-access-rlbzd\") pod \"whisker-78b4f5fbc4-jkv4w\" (UID: \"3f90d3d1-992d-4417-b4aa-7efb36d87df3\") " pod="calico-system/whisker-78b4f5fbc4-jkv4w"
Jul 14 22:02:13.909095 kubelet[2535]: I0714 22:02:13.908913 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b98ca949-7af6-44b4-b15a-f51c51b97182-calico-apiserver-certs\") pod \"calico-apiserver-687547dbff-nxmhz\" (UID: \"b98ca949-7af6-44b4-b15a-f51c51b97182\") " pod="calico-apiserver/calico-apiserver-687547dbff-nxmhz"
Jul 14 22:02:13.909095 kubelet[2535]: I0714 22:02:13.908979 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k6fn\" (UniqueName: \"kubernetes.io/projected/d2664461-d898-4ff2-850a-8e3d73709f9a-kube-api-access-7k6fn\") pod \"goldmane-58fd7646b9-66djn\" (UID: \"d2664461-d898-4ff2-850a-8e3d73709f9a\") " pod="calico-system/goldmane-58fd7646b9-66djn"
Jul 14 22:02:13.909095 kubelet[2535]: I0714 22:02:13.909018 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f90d3d1-992d-4417-b4aa-7efb36d87df3-whisker-ca-bundle\") pod \"whisker-78b4f5fbc4-jkv4w\" (UID: \"3f90d3d1-992d-4417-b4aa-7efb36d87df3\") " pod="calico-system/whisker-78b4f5fbc4-jkv4w"
Jul 14 22:02:13.909095 kubelet[2535]: I0714 22:02:13.909037 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkftl\" (UniqueName: \"kubernetes.io/projected/7c8f785c-fa60-472e-a6e1-a21274af8925-kube-api-access-bkftl\") pod \"coredns-7c65d6cfc9-ss2ss\" (UID: \"7c8f785c-fa60-472e-a6e1-a21274af8925\") " pod="kube-system/coredns-7c65d6cfc9-ss2ss"
Jul 14 22:02:13.909199 kubelet[2535]: I0714 22:02:13.909054 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2664461-d898-4ff2-850a-8e3d73709f9a-config\") pod \"goldmane-58fd7646b9-66djn\" (UID: \"d2664461-d898-4ff2-850a-8e3d73709f9a\") " pod="calico-system/goldmane-58fd7646b9-66djn"
Jul 14 22:02:13.909199 kubelet[2535]: I0714 22:02:13.909071 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f90d3d1-992d-4417-b4aa-7efb36d87df3-whisker-backend-key-pair\") pod \"whisker-78b4f5fbc4-jkv4w\" (UID: \"3f90d3d1-992d-4417-b4aa-7efb36d87df3\") " pod="calico-system/whisker-78b4f5fbc4-jkv4w"
Jul 14 22:02:14.111131 kubelet[2535]: E0714 22:02:14.111074 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:02:14.111691 kubelet[2535]: E0714 22:02:14.111477 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:02:14.111762 containerd[1440]: time="2025-07-14T22:02:14.111550485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ss2ss,Uid:7c8f785c-fa60-472e-a6e1-a21274af8925,Namespace:kube-system,Attempt:0,}"
Jul 14 22:02:14.112001 containerd[1440]: time="2025-07-14T22:02:14.111764402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wftww,Uid:07dad9bf-62f6-44ab-88b7-926fd88e9c73,Namespace:kube-system,Attempt:0,}"
Jul 14 22:02:14.124476 containerd[1440]: time="2025-07-14T22:02:14.123447030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687547dbff-nxmhz,Uid:b98ca949-7af6-44b4-b15a-f51c51b97182,Namespace:calico-apiserver,Attempt:0,}"
Jul 14 22:02:14.128049 containerd[1440]: time="2025-07-14T22:02:14.128005538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-568d8c6dc9-hlgkn,Uid:69b2a205-08a9-48bb-b9c3-874e85d81984,Namespace:calico-system,Attempt:0,}"
Jul 14 22:02:14.134575 containerd[1440]: time="2025-07-14T22:02:14.134538624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687547dbff-8vw7r,Uid:db3be38c-70ff-4df0-a2d5-d0462c499962,Namespace:calico-apiserver,Attempt:0,}"
Jul 14 22:02:14.139368 containerd[1440]: time="2025-07-14T22:02:14.139341210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b4f5fbc4-jkv4w,Uid:3f90d3d1-992d-4417-b4aa-7efb36d87df3,Namespace:calico-system,Attempt:0,}"
Jul 14 22:02:14.163776 containerd[1440]: time="2025-07-14T22:02:14.161510119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-66djn,Uid:d2664461-d898-4ff2-850a-8e3d73709f9a,Namespace:calico-system,Attempt:0,}"
Jul 14 22:02:14.551663 containerd[1440]: time="2025-07-14T22:02:14.551608462Z" level=error msg="Failed to destroy network for sandbox \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.552046 containerd[1440]: time="2025-07-14T22:02:14.551942498Z" level=error msg="encountered an error cleaning up failed sandbox \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.552046 containerd[1440]: time="2025-07-14T22:02:14.551992458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-568d8c6dc9-hlgkn,Uid:69b2a205-08a9-48bb-b9c3-874e85d81984,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.554792 kubelet[2535]: E0714 22:02:14.554740 2535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.556555 kubelet[2535]: E0714 22:02:14.556459 2535 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-568d8c6dc9-hlgkn"
Jul 14 22:02:14.556555 kubelet[2535]: E0714 22:02:14.556508 2535 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-568d8c6dc9-hlgkn"
Jul 14 22:02:14.556750 kubelet[2535]: E0714 22:02:14.556556 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-568d8c6dc9-hlgkn_calico-system(69b2a205-08a9-48bb-b9c3-874e85d81984)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-568d8c6dc9-hlgkn_calico-system(69b2a205-08a9-48bb-b9c3-874e85d81984)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-568d8c6dc9-hlgkn" podUID="69b2a205-08a9-48bb-b9c3-874e85d81984"
Jul 14 22:02:14.564136 containerd[1440]: time="2025-07-14T22:02:14.563579927Z" level=error msg="Failed to destroy network for sandbox \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.564136 containerd[1440]: time="2025-07-14T22:02:14.563652046Z" level=error msg="Failed to destroy network for sandbox \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.564514 containerd[1440]: time="2025-07-14T22:02:14.563971202Z" level=error msg="encountered an error cleaning up failed sandbox \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.564643 containerd[1440]: time="2025-07-14T22:02:14.564540316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687547dbff-nxmhz,Uid:b98ca949-7af6-44b4-b15a-f51c51b97182,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.564718 containerd[1440]: time="2025-07-14T22:02:14.564093161Z" level=error msg="encountered an error cleaning up failed sandbox \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.564751 containerd[1440]: time="2025-07-14T22:02:14.564737394Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wftww,Uid:07dad9bf-62f6-44ab-88b7-926fd88e9c73,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.564912 kubelet[2535]: E0714 22:02:14.564884 2535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.564953 kubelet[2535]: E0714 22:02:14.564937 2535 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wftww"
Jul 14 22:02:14.564978 kubelet[2535]: E0714 22:02:14.564957 2535 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wftww"
Jul 14 22:02:14.565003 kubelet[2535]: E0714 22:02:14.564884 2535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.565027 kubelet[2535]: E0714 22:02:14.564991 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wftww_kube-system(07dad9bf-62f6-44ab-88b7-926fd88e9c73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wftww_kube-system(07dad9bf-62f6-44ab-88b7-926fd88e9c73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wftww" podUID="07dad9bf-62f6-44ab-88b7-926fd88e9c73"
Jul 14 22:02:14.565027 kubelet[2535]: E0714 22:02:14.565012 2535 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-687547dbff-nxmhz"
Jul 14 22:02:14.565100 kubelet[2535]: E0714 22:02:14.565029 2535 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-687547dbff-nxmhz"
Jul 14 22:02:14.565100 kubelet[2535]: E0714 22:02:14.565060 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-687547dbff-nxmhz_calico-apiserver(b98ca949-7af6-44b4-b15a-f51c51b97182)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-687547dbff-nxmhz_calico-apiserver(b98ca949-7af6-44b4-b15a-f51c51b97182)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-687547dbff-nxmhz" podUID="b98ca949-7af6-44b4-b15a-f51c51b97182"
Jul 14 22:02:14.566434 containerd[1440]: time="2025-07-14T22:02:14.566130978Z" level=error msg="Failed to destroy network for sandbox \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.567293 containerd[1440]: time="2025-07-14T22:02:14.567148926Z" level=error msg="encountered an error cleaning up failed sandbox \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.567293 containerd[1440]: time="2025-07-14T22:02:14.567205526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ss2ss,Uid:7c8f785c-fa60-472e-a6e1-a21274af8925,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.567375 kubelet[2535]: E0714 22:02:14.567351 2535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.567405 kubelet[2535]: E0714 22:02:14.567391 2535 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ss2ss"
Jul 14 22:02:14.567430 kubelet[2535]: E0714 22:02:14.567406 2535 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-ss2ss"
Jul 14 22:02:14.567529 kubelet[2535]: E0714 22:02:14.567434 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-ss2ss_kube-system(7c8f785c-fa60-472e-a6e1-a21274af8925)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-ss2ss_kube-system(7c8f785c-fa60-472e-a6e1-a21274af8925)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ss2ss" podUID="7c8f785c-fa60-472e-a6e1-a21274af8925"
Jul 14 22:02:14.568369 containerd[1440]: time="2025-07-14T22:02:14.568318753Z" level=error msg="Failed to destroy network for sandbox \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.569373 containerd[1440]: time="2025-07-14T22:02:14.569245823Z" level=error msg="encountered an error cleaning up failed sandbox \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.569373 containerd[1440]: time="2025-07-14T22:02:14.569305982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687547dbff-8vw7r,Uid:db3be38c-70ff-4df0-a2d5-d0462c499962,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.569685 kubelet[2535]: E0714 22:02:14.569660 2535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.569759 kubelet[2535]: E0714 22:02:14.569697 2535 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-687547dbff-8vw7r"
Jul 14 22:02:14.569759 kubelet[2535]: E0714 22:02:14.569712 2535 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-687547dbff-8vw7r"
Jul 14 22:02:14.569812 kubelet[2535]: E0714 22:02:14.569749 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-687547dbff-8vw7r_calico-apiserver(db3be38c-70ff-4df0-a2d5-d0462c499962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-687547dbff-8vw7r_calico-apiserver(db3be38c-70ff-4df0-a2d5-d0462c499962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-687547dbff-8vw7r" podUID="db3be38c-70ff-4df0-a2d5-d0462c499962"
Jul 14 22:02:14.573783 containerd[1440]: time="2025-07-14T22:02:14.573728532Z" level=error msg="Failed to destroy network for sandbox \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.574222 containerd[1440]: time="2025-07-14T22:02:14.574147247Z" level=error msg="encountered an error cleaning up failed sandbox \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.574251 containerd[1440]: time="2025-07-14T22:02:14.574216046Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78b4f5fbc4-jkv4w,Uid:3f90d3d1-992d-4417-b4aa-7efb36d87df3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.574415 kubelet[2535]: E0714 22:02:14.574378 2535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.574507 kubelet[2535]: E0714 22:02:14.574424 2535 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78b4f5fbc4-jkv4w"
Jul 14 22:02:14.574768 containerd[1440]: time="2025-07-14T22:02:14.574735440Z" level=error msg="Failed to destroy network for sandbox \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.575087 containerd[1440]: time="2025-07-14T22:02:14.575046917Z" level=error msg="encountered an error cleaning up failed sandbox \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.575125 containerd[1440]: time="2025-07-14T22:02:14.575105436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-66djn,Uid:d2664461-d898-4ff2-850a-8e3d73709f9a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.575323 kubelet[2535]: E0714 22:02:14.575299 2535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 14 22:02:14.575536 kubelet[2535]: E0714 22:02:14.575395 2535 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-66djn"
Jul 14 22:02:14.575536 kubelet[2535]: E0714 22:02:14.575412 2535 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-66djn"
Jul 14 22:02:14.575536 kubelet[2535]: E0714 22:02:14.575315 2535 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-78b4f5fbc4-jkv4w"
Jul 14 22:02:14.575644 kubelet[2535]: E0714 22:02:14.575448 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-66djn_calico-system(d2664461-d898-4ff2-850a-8e3d73709f9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-66djn_calico-system(d2664461-d898-4ff2-850a-8e3d73709f9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-66djn" podUID="d2664461-d898-4ff2-850a-8e3d73709f9a"
Jul 14 22:02:14.575644 kubelet[2535]: E0714 22:02:14.575544 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to
\"CreatePodSandbox\" for \"whisker-78b4f5fbc4-jkv4w_calico-system(3f90d3d1-992d-4417-b4aa-7efb36d87df3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-78b4f5fbc4-jkv4w_calico-system(3f90d3d1-992d-4417-b4aa-7efb36d87df3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78b4f5fbc4-jkv4w" podUID="3f90d3d1-992d-4417-b4aa-7efb36d87df3" Jul 14 22:02:14.719916 kubelet[2535]: I0714 22:02:14.719361 2535 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:14.720892 containerd[1440]: time="2025-07-14T22:02:14.720610509Z" level=info msg="StopPodSandbox for \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\"" Jul 14 22:02:14.720969 kubelet[2535]: I0714 22:02:14.720535 2535 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:14.721498 containerd[1440]: time="2025-07-14T22:02:14.721471779Z" level=info msg="Ensure that sandbox 7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad in task-service has been cleanup successfully" Jul 14 22:02:14.722355 containerd[1440]: time="2025-07-14T22:02:14.722139571Z" level=info msg="StopPodSandbox for \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\"" Jul 14 22:02:14.722633 kubelet[2535]: I0714 22:02:14.722606 2535 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:14.723127 containerd[1440]: time="2025-07-14T22:02:14.722877563Z" level=info msg="Ensure that sandbox c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6 in task-service has been cleanup successfully" Jul 14 22:02:14.723303 containerd[1440]: time="2025-07-14T22:02:14.723269239Z" level=info msg="StopPodSandbox for \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\"" Jul 14 22:02:14.723427 containerd[1440]: time="2025-07-14T22:02:14.723407237Z" level=info msg="Ensure that sandbox eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396 in task-service has been cleanup successfully" Jul 14 22:02:14.725084 kubelet[2535]: I0714 22:02:14.725042 2535 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:14.726982 containerd[1440]: time="2025-07-14T22:02:14.726932437Z" level=info msg="StopPodSandbox for \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\"" Jul 14 22:02:14.727050 kubelet[2535]: I0714 22:02:14.727030 2535 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:14.727174 containerd[1440]: time="2025-07-14T22:02:14.727148435Z" level=info msg="Ensure that sandbox 24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f in task-service has been cleanup successfully" Jul 14 22:02:14.727507 containerd[1440]: time="2025-07-14T22:02:14.727478711Z" level=info msg="StopPodSandbox for 
\"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\"" Jul 14 22:02:14.727648 containerd[1440]: time="2025-07-14T22:02:14.727622909Z" level=info msg="Ensure that sandbox ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80 in task-service has been cleanup successfully" Jul 14 22:02:14.730253 kubelet[2535]: I0714 22:02:14.730224 2535 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:14.730977 containerd[1440]: time="2025-07-14T22:02:14.730885632Z" level=info msg="StopPodSandbox for \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\"" Jul 14 22:02:14.732950 containerd[1440]: time="2025-07-14T22:02:14.732872250Z" level=info msg="Ensure that sandbox 8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad in task-service has been cleanup successfully" Jul 14 22:02:14.736682 containerd[1440]: time="2025-07-14T22:02:14.736642207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 14 22:02:14.742331 kubelet[2535]: I0714 22:02:14.742306 2535 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:14.743203 containerd[1440]: time="2025-07-14T22:02:14.743164493Z" level=info msg="StopPodSandbox for \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\"" Jul 14 22:02:14.744261 containerd[1440]: time="2025-07-14T22:02:14.743324612Z" level=info msg="Ensure that sandbox 195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c in task-service has been cleanup successfully" Jul 14 22:02:14.761693 containerd[1440]: time="2025-07-14T22:02:14.761648644Z" level=error msg="StopPodSandbox for \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\" failed" error="failed to destroy network for sandbox \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:02:14.762062 kubelet[2535]: E0714 22:02:14.762022 2535 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:14.762214 kubelet[2535]: E0714 22:02:14.762167 2535 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396"} Jul 14 22:02:14.762289 kubelet[2535]: E0714 22:02:14.762276 2535 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c8f785c-fa60-472e-a6e1-a21274af8925\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:02:14.762390 kubelet[2535]: E0714 
22:02:14.762369 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c8f785c-fa60-472e-a6e1-a21274af8925\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-ss2ss" podUID="7c8f785c-fa60-472e-a6e1-a21274af8925" Jul 14 22:02:14.777253 containerd[1440]: time="2025-07-14T22:02:14.777201628Z" level=error msg="StopPodSandbox for \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\" failed" error="failed to destroy network for sandbox \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:02:14.777512 kubelet[2535]: E0714 22:02:14.777443 2535 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:14.777584 kubelet[2535]: E0714 22:02:14.777527 2535 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c"} Jul 14 22:02:14.777584 kubelet[2535]: E0714 22:02:14.777570 2535 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d2664461-d898-4ff2-850a-8e3d73709f9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:02:14.777672 kubelet[2535]: E0714 22:02:14.777591 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d2664461-d898-4ff2-850a-8e3d73709f9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-66djn" podUID="d2664461-d898-4ff2-850a-8e3d73709f9a" Jul 14 22:02:14.779812 containerd[1440]: time="2025-07-14T22:02:14.779781319Z" level=error msg="StopPodSandbox for \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\" failed" error="failed to destroy network for sandbox \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 
22:02:14.780052 kubelet[2535]: E0714 22:02:14.780026 2535 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:14.780147 kubelet[2535]: E0714 22:02:14.780130 2535 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80"} Jul 14 22:02:14.780227 kubelet[2535]: E0714 22:02:14.780214 2535 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"07dad9bf-62f6-44ab-88b7-926fd88e9c73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:02:14.780320 kubelet[2535]: E0714 22:02:14.780301 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"07dad9bf-62f6-44ab-88b7-926fd88e9c73\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wftww" podUID="07dad9bf-62f6-44ab-88b7-926fd88e9c73" Jul 14 22:02:14.782174 containerd[1440]: time="2025-07-14T22:02:14.782135892Z" level=error msg="StopPodSandbox for \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\" failed" error="failed to destroy network for sandbox \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:02:14.782339 kubelet[2535]: E0714 22:02:14.782310 2535 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:14.782377 kubelet[2535]: E0714 22:02:14.782338 2535 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6"} Jul 14 22:02:14.782404 kubelet[2535]: E0714 22:02:14.782374 2535 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69b2a205-08a9-48bb-b9c3-874e85d81984\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:02:14.782404 kubelet[2535]: E0714 22:02:14.782394 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69b2a205-08a9-48bb-b9c3-874e85d81984\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-568d8c6dc9-hlgkn" podUID="69b2a205-08a9-48bb-b9c3-874e85d81984" Jul 14 22:02:14.784816 containerd[1440]: time="2025-07-14T22:02:14.784763982Z" level=error msg="StopPodSandbox for \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\" failed" error="failed to destroy network for sandbox \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:02:14.785166 kubelet[2535]: E0714 22:02:14.785041 2535 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:14.785166 kubelet[2535]: E0714 22:02:14.785075 2535 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad"} Jul 14 22:02:14.785540 kubelet[2535]: E0714 22:02:14.785100 2535 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3f90d3d1-992d-4417-b4aa-7efb36d87df3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:02:14.785540 kubelet[2535]: E0714 22:02:14.785505 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3f90d3d1-992d-4417-b4aa-7efb36d87df3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-78b4f5fbc4-jkv4w" podUID="3f90d3d1-992d-4417-b4aa-7efb36d87df3" Jul 14 22:02:14.788685 containerd[1440]: time="2025-07-14T22:02:14.788609939Z" level=error msg="StopPodSandbox for \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\" 
failed" error="failed to destroy network for sandbox \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:02:14.788805 kubelet[2535]: E0714 22:02:14.788758 2535 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:14.788805 kubelet[2535]: E0714 22:02:14.788791 2535 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f"} Jul 14 22:02:14.788869 kubelet[2535]: E0714 22:02:14.788818 2535 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"db3be38c-70ff-4df0-a2d5-d0462c499962\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:02:14.788869 kubelet[2535]: E0714 22:02:14.788835 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"db3be38c-70ff-4df0-a2d5-d0462c499962\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-687547dbff-8vw7r" podUID="db3be38c-70ff-4df0-a2d5-d0462c499962" Jul 14 22:02:14.793941 containerd[1440]: time="2025-07-14T22:02:14.793871679Z" level=error msg="StopPodSandbox for \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\" failed" error="failed to destroy network for sandbox \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:02:14.794067 kubelet[2535]: E0714 22:02:14.794034 2535 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:14.794105 kubelet[2535]: E0714 22:02:14.794072 2535 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad"} Jul 14 22:02:14.794105 kubelet[2535]: 
E0714 22:02:14.794095 2535 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b98ca949-7af6-44b4-b15a-f51c51b97182\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:02:14.794164 kubelet[2535]: E0714 22:02:14.794116 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b98ca949-7af6-44b4-b15a-f51c51b97182\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-687547dbff-nxmhz" podUID="b98ca949-7af6-44b4-b15a-f51c51b97182" Jul 14 22:02:15.064703 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80-shm.mount: Deactivated successfully. Jul 14 22:02:15.064791 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396-shm.mount: Deactivated successfully. Jul 14 22:02:15.612964 systemd[1]: Created slice kubepods-besteffort-pod88e059ee_2b3a_4b57_8789_ebeef41ce071.slice - libcontainer container kubepods-besteffort-pod88e059ee_2b3a_4b57_8789_ebeef41ce071.slice. Jul 14 22:02:15.615051 containerd[1440]: time="2025-07-14T22:02:15.614964165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbwq6,Uid:88e059ee-2b3a-4b57-8789-ebeef41ce071,Namespace:calico-system,Attempt:0,}" Jul 14 22:02:15.665105 containerd[1440]: time="2025-07-14T22:02:15.665053083Z" level=error msg="Failed to destroy network for sandbox \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:02:15.665369 containerd[1440]: time="2025-07-14T22:02:15.665336879Z" level=error msg="encountered an error cleaning up failed sandbox \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:02:15.665405 containerd[1440]: time="2025-07-14T22:02:15.665385879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbwq6,Uid:88e059ee-2b3a-4b57-8789-ebeef41ce071,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:02:15.666537 kubelet[2535]: E0714 22:02:15.665621 2535 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:02:15.666537 kubelet[2535]: E0714 22:02:15.665675 2535 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbwq6" Jul 14 22:02:15.666537 kubelet[2535]: E0714 22:02:15.665693 2535 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qbwq6" Jul 14 22:02:15.666655 kubelet[2535]: E0714 22:02:15.665744 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qbwq6_calico-system(88e059ee-2b3a-4b57-8789-ebeef41ce071)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qbwq6_calico-system(88e059ee-2b3a-4b57-8789-ebeef41ce071)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qbwq6" podUID="88e059ee-2b3a-4b57-8789-ebeef41ce071" Jul 14 22:02:15.667623 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e-shm.mount: Deactivated successfully. 
Jul 14 22:02:15.745322 kubelet[2535]: I0714 22:02:15.745290 2535 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:15.746233 containerd[1440]: time="2025-07-14T22:02:15.745972015Z" level=info msg="StopPodSandbox for \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\"" Jul 14 22:02:15.746233 containerd[1440]: time="2025-07-14T22:02:15.746148933Z" level=info msg="Ensure that sandbox 43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e in task-service has been cleanup successfully" Jul 14 22:02:15.765963 containerd[1440]: time="2025-07-14T22:02:15.765916031Z" level=error msg="StopPodSandbox for \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\" failed" error="failed to destroy network for sandbox \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:02:15.766354 kubelet[2535]: E0714 22:02:15.766118 2535 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:15.766354 kubelet[2535]: E0714 22:02:15.766184 2535 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e"} Jul 14 22:02:15.766354 kubelet[2535]: E0714 22:02:15.766215 2535 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88e059ee-2b3a-4b57-8789-ebeef41ce071\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:02:15.766354 kubelet[2535]: E0714 22:02:15.766237 2535 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88e059ee-2b3a-4b57-8789-ebeef41ce071\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qbwq6" podUID="88e059ee-2b3a-4b57-8789-ebeef41ce071" Jul 14 22:02:19.188012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount331958778.mount: Deactivated successfully. 
Jul 14 22:02:19.464361 containerd[1440]: time="2025-07-14T22:02:19.464233608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:19.464817 containerd[1440]: time="2025-07-14T22:02:19.464791900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 14 22:02:19.465628 containerd[1440]: time="2025-07-14T22:02:19.465593758Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:19.467402 containerd[1440]: time="2025-07-14T22:02:19.467368156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:19.468077 containerd[1440]: time="2025-07-14T22:02:19.468046051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.731360204s" Jul 14 22:02:19.468120 containerd[1440]: time="2025-07-14T22:02:19.468079012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 14 22:02:19.476711 containerd[1440]: time="2025-07-14T22:02:19.476672400Z" level=info msg="CreateContainer within sandbox \"3671fcb0dc2e969e8156f6e67dfc5015d2694805e826cbce7d4031b0010c7465\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 22:02:19.495526 containerd[1440]: time="2025-07-14T22:02:19.495486131Z" level=info msg="CreateContainer within sandbox \"3671fcb0dc2e969e8156f6e67dfc5015d2694805e826cbce7d4031b0010c7465\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1223a1b6330d62d09d7c5b0f4565007c374e0f19bab557528a22bc45052d8349\"" Jul 14 22:02:19.497067 containerd[1440]: time="2025-07-14T22:02:19.495927941Z" level=info msg="StartContainer for \"1223a1b6330d62d09d7c5b0f4565007c374e0f19bab557528a22bc45052d8349\"" Jul 14 22:02:19.550271 systemd[1]: Started cri-containerd-1223a1b6330d62d09d7c5b0f4565007c374e0f19bab557528a22bc45052d8349.scope - libcontainer container 1223a1b6330d62d09d7c5b0f4565007c374e0f19bab557528a22bc45052d8349. Jul 14 22:02:19.573187 containerd[1440]: time="2025-07-14T22:02:19.573149909Z" level=info msg="StartContainer for \"1223a1b6330d62d09d7c5b0f4565007c374e0f19bab557528a22bc45052d8349\" returns successfully" Jul 14 22:02:19.790706 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 14 22:02:19.790828 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 14 22:02:19.879795 kubelet[2535]: I0714 22:02:19.879725 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mk6p8" podStartSLOduration=1.291933136 podStartE2EDuration="12.879705251s" podCreationTimestamp="2025-07-14 22:02:07 +0000 UTC" firstStartedPulling="2025-07-14 22:02:07.880969831 +0000 UTC m=+25.358956515" lastFinishedPulling="2025-07-14 22:02:19.468741906 +0000 UTC m=+36.946728630" observedRunningTime="2025-07-14 22:02:19.771414924 +0000 UTC m=+37.249401648" watchObservedRunningTime="2025-07-14 22:02:19.879705251 +0000 UTC m=+37.357691975" Jul 14 22:02:19.888396 containerd[1440]: time="2025-07-14T22:02:19.888263038Z" level=info msg="StopPodSandbox for \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\"" Jul 14 22:02:20.093331 containerd[1440]: 2025-07-14 22:02:19.987 [INFO][3820] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:20.093331 containerd[1440]: 2025-07-14 22:02:19.988 [INFO][3820] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" iface="eth0" netns="/var/run/netns/cni-802d4cbb-4212-8a6a-8597-91ad696274a1" Jul 14 22:02:20.093331 containerd[1440]: 2025-07-14 22:02:19.990 [INFO][3820] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" iface="eth0" netns="/var/run/netns/cni-802d4cbb-4212-8a6a-8597-91ad696274a1" Jul 14 22:02:20.093331 containerd[1440]: 2025-07-14 22:02:19.991 [INFO][3820] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" iface="eth0" netns="/var/run/netns/cni-802d4cbb-4212-8a6a-8597-91ad696274a1" Jul 14 22:02:20.093331 containerd[1440]: 2025-07-14 22:02:19.991 [INFO][3820] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:20.093331 containerd[1440]: 2025-07-14 22:02:19.992 [INFO][3820] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:20.093331 containerd[1440]: 2025-07-14 22:02:20.079 [INFO][3832] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" HandleID="k8s-pod-network.7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Workload="localhost-k8s-whisker--78b4f5fbc4--jkv4w-eth0" Jul 14 22:02:20.093331 containerd[1440]: 2025-07-14 22:02:20.079 [INFO][3832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:20.093331 containerd[1440]: 2025-07-14 22:02:20.079 [INFO][3832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:20.093331 containerd[1440]: 2025-07-14 22:02:20.088 [WARNING][3832] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" HandleID="k8s-pod-network.7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Workload="localhost-k8s-whisker--78b4f5fbc4--jkv4w-eth0" Jul 14 22:02:20.093331 containerd[1440]: 2025-07-14 22:02:20.088 [INFO][3832] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" HandleID="k8s-pod-network.7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Workload="localhost-k8s-whisker--78b4f5fbc4--jkv4w-eth0" Jul 14 22:02:20.093331 containerd[1440]: 2025-07-14 22:02:20.089 [INFO][3832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:20.093331 containerd[1440]: 2025-07-14 22:02:20.091 [INFO][3820] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:20.093963 containerd[1440]: time="2025-07-14T22:02:20.093400483Z" level=info msg="TearDown network for sandbox \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\" successfully" Jul 14 22:02:20.093963 containerd[1440]: time="2025-07-14T22:02:20.093425763Z" level=info msg="StopPodSandbox for \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\" returns successfully" Jul 14 22:02:20.189199 systemd[1]: run-netns-cni\x2d802d4cbb\x2d4212\x2d8a6a\x2d8597\x2d91ad696274a1.mount: Deactivated successfully. Jul 14 22:02:20.256250 kubelet[2535]: I0714 22:02:20.256209 2535 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlbzd\" (UniqueName: \"kubernetes.io/projected/3f90d3d1-992d-4417-b4aa-7efb36d87df3-kube-api-access-rlbzd\") pod \"3f90d3d1-992d-4417-b4aa-7efb36d87df3\" (UID: \"3f90d3d1-992d-4417-b4aa-7efb36d87df3\") " Jul 14 22:02:20.256250 kubelet[2535]: I0714 22:02:20.256250 2535 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f90d3d1-992d-4417-b4aa-7efb36d87df3-whisker-ca-bundle\") pod \"3f90d3d1-992d-4417-b4aa-7efb36d87df3\" (UID: \"3f90d3d1-992d-4417-b4aa-7efb36d87df3\") " Jul 14 22:02:20.256418 kubelet[2535]: I0714 22:02:20.256270 2535 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f90d3d1-992d-4417-b4aa-7efb36d87df3-whisker-backend-key-pair\") pod \"3f90d3d1-992d-4417-b4aa-7efb36d87df3\" (UID: \"3f90d3d1-992d-4417-b4aa-7efb36d87df3\") " Jul 14 22:02:20.257815 kubelet[2535]: I0714 22:02:20.257737 2535 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f90d3d1-992d-4417-b4aa-7efb36d87df3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3f90d3d1-992d-4417-b4aa-7efb36d87df3" (UID: "3f90d3d1-992d-4417-b4aa-7efb36d87df3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 22:02:20.261407 systemd[1]: var-lib-kubelet-pods-3f90d3d1\x2d992d\x2d4417\x2db4aa\x2d7efb36d87df3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drlbzd.mount: Deactivated successfully. Jul 14 22:02:20.261523 systemd[1]: var-lib-kubelet-pods-3f90d3d1\x2d992d\x2d4417\x2db4aa\x2d7efb36d87df3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 14 22:02:20.261637 kubelet[2535]: I0714 22:02:20.261578 2535 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f90d3d1-992d-4417-b4aa-7efb36d87df3-kube-api-access-rlbzd" (OuterVolumeSpecName: "kube-api-access-rlbzd") pod "3f90d3d1-992d-4417-b4aa-7efb36d87df3" (UID: "3f90d3d1-992d-4417-b4aa-7efb36d87df3"). InnerVolumeSpecName "kube-api-access-rlbzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:02:20.262194 kubelet[2535]: I0714 22:02:20.262068 2535 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f90d3d1-992d-4417-b4aa-7efb36d87df3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3f90d3d1-992d-4417-b4aa-7efb36d87df3" (UID: "3f90d3d1-992d-4417-b4aa-7efb36d87df3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 22:02:20.357576 kubelet[2535]: I0714 22:02:20.357438 2535 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlbzd\" (UniqueName: \"kubernetes.io/projected/3f90d3d1-992d-4417-b4aa-7efb36d87df3-kube-api-access-rlbzd\") on node \"localhost\" DevicePath \"\"" Jul 14 22:02:20.357576 kubelet[2535]: I0714 22:02:20.357495 2535 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f90d3d1-992d-4417-b4aa-7efb36d87df3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 14 22:02:20.357576 kubelet[2535]: I0714 22:02:20.357508 2535 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3f90d3d1-992d-4417-b4aa-7efb36d87df3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 14 22:02:20.615197 systemd[1]: Removed slice kubepods-besteffort-pod3f90d3d1_992d_4417_b4aa_7efb36d87df3.slice - libcontainer container kubepods-besteffort-pod3f90d3d1_992d_4417_b4aa_7efb36d87df3.slice. Jul 14 22:02:20.757851 kubelet[2535]: I0714 22:02:20.757806 2535 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:02:20.827081 systemd[1]: Created slice kubepods-besteffort-podf7c888a8_b35c_4159_a667_7412fa5bebd0.slice - libcontainer container kubepods-besteffort-podf7c888a8_b35c_4159_a667_7412fa5bebd0.slice. 
Jul 14 22:02:20.960877 kubelet[2535]: I0714 22:02:20.960763 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f7c888a8-b35c-4159-a667-7412fa5bebd0-whisker-backend-key-pair\") pod \"whisker-55f5fd8b54-q5l2s\" (UID: \"f7c888a8-b35c-4159-a667-7412fa5bebd0\") " pod="calico-system/whisker-55f5fd8b54-q5l2s" Jul 14 22:02:20.961369 kubelet[2535]: I0714 22:02:20.961274 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7c888a8-b35c-4159-a667-7412fa5bebd0-whisker-ca-bundle\") pod \"whisker-55f5fd8b54-q5l2s\" (UID: \"f7c888a8-b35c-4159-a667-7412fa5bebd0\") " pod="calico-system/whisker-55f5fd8b54-q5l2s" Jul 14 22:02:20.961369 kubelet[2535]: I0714 22:02:20.961324 2535 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmtc7\" (UniqueName: \"kubernetes.io/projected/f7c888a8-b35c-4159-a667-7412fa5bebd0-kube-api-access-kmtc7\") pod \"whisker-55f5fd8b54-q5l2s\" (UID: \"f7c888a8-b35c-4159-a667-7412fa5bebd0\") " pod="calico-system/whisker-55f5fd8b54-q5l2s" Jul 14 22:02:21.131769 containerd[1440]: time="2025-07-14T22:02:21.131352080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55f5fd8b54-q5l2s,Uid:f7c888a8-b35c-4159-a667-7412fa5bebd0,Namespace:calico-system,Attempt:0,}" Jul 14 22:02:21.313251 systemd-networkd[1380]: califa1277c0f09: Link UP Jul 14 22:02:21.313484 systemd-networkd[1380]: califa1277c0f09: Gained carrier Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.213 [INFO][3952] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.231 [INFO][3952] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--55f5fd8b54--q5l2s-eth0 whisker-55f5fd8b54- calico-system f7c888a8-b35c-4159-a667-7412fa5bebd0 896 0 2025-07-14 22:02:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55f5fd8b54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-55f5fd8b54-q5l2s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califa1277c0f09 [] [] }} ContainerID="67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" Namespace="calico-system" Pod="whisker-55f5fd8b54-q5l2s" WorkloadEndpoint="localhost-k8s-whisker--55f5fd8b54--q5l2s-" Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.231 [INFO][3952] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" Namespace="calico-system" Pod="whisker-55f5fd8b54-q5l2s" WorkloadEndpoint="localhost-k8s-whisker--55f5fd8b54--q5l2s-eth0" Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.259 [INFO][3971] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" HandleID="k8s-pod-network.67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" Workload="localhost-k8s-whisker--55f5fd8b54--q5l2s-eth0" Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.259 [INFO][3971] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" HandleID="k8s-pod-network.67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" Workload="localhost-k8s-whisker--55f5fd8b54--q5l2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-55f5fd8b54-q5l2s", "timestamp":"2025-07-14 22:02:21.259180895 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.259 [INFO][3971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.259 [INFO][3971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.259 [INFO][3971] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.272 [INFO][3971] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" host="localhost" Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.283 [INFO][3971] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.287 [INFO][3971] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.289 [INFO][3971] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.292 [INFO][3971] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.292 [INFO][3971] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" host="localhost" Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.293 [INFO][3971] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67 Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.297 [INFO][3971] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" host="localhost" Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.303 [INFO][3971] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" host="localhost" Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.303 [INFO][3971] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" host="localhost" Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.303 [INFO][3971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:02:21.329632 containerd[1440]: 2025-07-14 22:02:21.303 [INFO][3971] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" HandleID="k8s-pod-network.67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" Workload="localhost-k8s-whisker--55f5fd8b54--q5l2s-eth0" Jul 14 22:02:21.330171 containerd[1440]: 2025-07-14 22:02:21.305 [INFO][3952] cni-plugin/k8s.go 418: Populated endpoint ContainerID="67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" Namespace="calico-system" Pod="whisker-55f5fd8b54-q5l2s" WorkloadEndpoint="localhost-k8s-whisker--55f5fd8b54--q5l2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55f5fd8b54--q5l2s-eth0", GenerateName:"whisker-55f5fd8b54-", Namespace:"calico-system", SelfLink:"", UID:"f7c888a8-b35c-4159-a667-7412fa5bebd0", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55f5fd8b54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-55f5fd8b54-q5l2s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califa1277c0f09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:21.330171 containerd[1440]: 2025-07-14 22:02:21.305 [INFO][3952] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" Namespace="calico-system" Pod="whisker-55f5fd8b54-q5l2s" WorkloadEndpoint="localhost-k8s-whisker--55f5fd8b54--q5l2s-eth0" Jul 14 22:02:21.330171 containerd[1440]: 2025-07-14 22:02:21.305 [INFO][3952] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa1277c0f09 ContainerID="67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" Namespace="calico-system" Pod="whisker-55f5fd8b54-q5l2s" WorkloadEndpoint="localhost-k8s-whisker--55f5fd8b54--q5l2s-eth0" Jul 14 22:02:21.330171 containerd[1440]: 2025-07-14 22:02:21.315 [INFO][3952] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" Namespace="calico-system" Pod="whisker-55f5fd8b54-q5l2s" WorkloadEndpoint="localhost-k8s-whisker--55f5fd8b54--q5l2s-eth0" Jul 14 22:02:21.330171 containerd[1440]: 2025-07-14 22:02:21.316 [INFO][3952] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" Namespace="calico-system" Pod="whisker-55f5fd8b54-q5l2s" WorkloadEndpoint="localhost-k8s-whisker--55f5fd8b54--q5l2s-eth0"
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55f5fd8b54--q5l2s-eth0", GenerateName:"whisker-55f5fd8b54-", Namespace:"calico-system", SelfLink:"", UID:"f7c888a8-b35c-4159-a667-7412fa5bebd0", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55f5fd8b54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67", Pod:"whisker-55f5fd8b54-q5l2s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califa1277c0f09", MAC:"6e:bd:01:7c:1b:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:21.330171 containerd[1440]: 2025-07-14 22:02:21.325 [INFO][3952] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67" Namespace="calico-system" Pod="whisker-55f5fd8b54-q5l2s" WorkloadEndpoint="localhost-k8s-whisker--55f5fd8b54--q5l2s-eth0" Jul 14 22:02:21.346256 containerd[1440]: time="2025-07-14T22:02:21.345971803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:02:21.346256 containerd[1440]: time="2025-07-14T22:02:21.346027004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:02:21.346256 containerd[1440]: time="2025-07-14T22:02:21.346049324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:21.346256 containerd[1440]: time="2025-07-14T22:02:21.346143406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:21.361618 systemd[1]: Started cri-containerd-67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67.scope - libcontainer container 67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67. 
Jul 14 22:02:21.371419 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:02:21.387161 containerd[1440]: time="2025-07-14T22:02:21.387114632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55f5fd8b54-q5l2s,Uid:f7c888a8-b35c-4159-a667-7412fa5bebd0,Namespace:calico-system,Attempt:0,} returns sandbox id \"67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67\"" Jul 14 22:02:21.389415 containerd[1440]: time="2025-07-14T22:02:21.389371397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 14 22:02:22.610993 kubelet[2535]: I0714 22:02:22.610945 2535 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f90d3d1-992d-4417-b4aa-7efb36d87df3" path="/var/lib/kubelet/pods/3f90d3d1-992d-4417-b4aa-7efb36d87df3/volumes" Jul 14 22:02:22.701305 containerd[1440]: time="2025-07-14T22:02:22.701263368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:22.702497 containerd[1440]: time="2025-07-14T22:02:22.702446191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 14 22:02:22.703480 containerd[1440]: time="2025-07-14T22:02:22.703432170Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:22.705587 containerd[1440]: time="2025-07-14T22:02:22.705561011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:22.706455 containerd[1440]: time="2025-07-14T22:02:22.706423388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.31700707s" Jul 14 22:02:22.706494 containerd[1440]: time="2025-07-14T22:02:22.706474629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 14 22:02:22.709222 containerd[1440]: time="2025-07-14T22:02:22.709188721Z" level=info msg="CreateContainer within sandbox \"67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 14 22:02:22.721650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3910684297.mount: Deactivated successfully. 
Jul 14 22:02:22.722660 containerd[1440]: time="2025-07-14T22:02:22.722512859Z" level=info msg="CreateContainer within sandbox \"67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"9bfd6bb228631cfe0e0766a5c2f92f0cf58c1f3537455e7178164d6cf90c48a1\"" Jul 14 22:02:22.723037 containerd[1440]: time="2025-07-14T22:02:22.723006428Z" level=info msg="StartContainer for \"9bfd6bb228631cfe0e0766a5c2f92f0cf58c1f3537455e7178164d6cf90c48a1\"" Jul 14 22:02:22.742704 systemd[1]: run-containerd-runc-k8s.io-9bfd6bb228631cfe0e0766a5c2f92f0cf58c1f3537455e7178164d6cf90c48a1-runc.Amnsit.mount: Deactivated successfully. Jul 14 22:02:22.751620 systemd[1]: Started cri-containerd-9bfd6bb228631cfe0e0766a5c2f92f0cf58c1f3537455e7178164d6cf90c48a1.scope - libcontainer container 9bfd6bb228631cfe0e0766a5c2f92f0cf58c1f3537455e7178164d6cf90c48a1. Jul 14 22:02:22.779949 containerd[1440]: time="2025-07-14T22:02:22.779898647Z" level=info msg="StartContainer for \"9bfd6bb228631cfe0e0766a5c2f92f0cf58c1f3537455e7178164d6cf90c48a1\" returns successfully" Jul 14 22:02:22.785242 containerd[1440]: time="2025-07-14T22:02:22.785175589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 14 22:02:22.875693 systemd-networkd[1380]: califa1277c0f09: Gained IPv6LL Jul 14 22:02:24.405271 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount265019378.mount: Deactivated successfully. Jul 14 22:02:24.443620 containerd[1440]: time="2025-07-14T22:02:24.443579189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:24.444423 containerd[1440]: time="2025-07-14T22:02:24.444396563Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 14 22:02:24.445078 containerd[1440]: time="2025-07-14T22:02:24.445043535Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:24.451240 containerd[1440]: time="2025-07-14T22:02:24.450321388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:24.451240 containerd[1440]: time="2025-07-14T22:02:24.451112922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.665895372s" Jul 14 22:02:24.451240 containerd[1440]: time="2025-07-14T22:02:24.451145643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 14 22:02:24.453595 containerd[1440]: time="2025-07-14T22:02:24.453569766Z" level=info msg="CreateContainer within sandbox \"67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 14 22:02:24.464777 containerd[1440]: time="2025-07-14T22:02:24.464712804Z" level=info 
msg="CreateContainer within sandbox \"67377a6182af9e536c8757b7ae423d875468564a49aca3e95012d1908d67ba67\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"ca1d753771ebebaf6da39597deebd78b9599f434100eacfa8b2108ebe2f5f829\"" Jul 14 22:02:24.465366 containerd[1440]: time="2025-07-14T22:02:24.465293774Z" level=info msg="StartContainer for \"ca1d753771ebebaf6da39597deebd78b9599f434100eacfa8b2108ebe2f5f829\"" Jul 14 22:02:24.526672 systemd[1]: Started cri-containerd-ca1d753771ebebaf6da39597deebd78b9599f434100eacfa8b2108ebe2f5f829.scope - libcontainer container ca1d753771ebebaf6da39597deebd78b9599f434100eacfa8b2108ebe2f5f829. Jul 14 22:02:24.553657 containerd[1440]: time="2025-07-14T22:02:24.553618143Z" level=info msg="StartContainer for \"ca1d753771ebebaf6da39597deebd78b9599f434100eacfa8b2108ebe2f5f829\" returns successfully" Jul 14 22:02:24.805679 kubelet[2535]: I0714 22:02:24.805603 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-55f5fd8b54-q5l2s" podStartSLOduration=1.741681651 podStartE2EDuration="4.805588177s" podCreationTimestamp="2025-07-14 22:02:20 +0000 UTC" firstStartedPulling="2025-07-14 22:02:21.388394977 +0000 UTC m=+38.866381741" lastFinishedPulling="2025-07-14 22:02:24.452301543 +0000 UTC m=+41.930288267" observedRunningTime="2025-07-14 22:02:24.805356613 +0000 UTC m=+42.283343337" watchObservedRunningTime="2025-07-14 22:02:24.805588177 +0000 UTC m=+42.283574901" Jul 14 22:02:25.609648 containerd[1440]: time="2025-07-14T22:02:25.609333718Z" level=info msg="StopPodSandbox for \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\"" Jul 14 22:02:25.610123 containerd[1440]: time="2025-07-14T22:02:25.609344558Z" level=info msg="StopPodSandbox for \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\"" Jul 14 22:02:25.699904 containerd[1440]: 2025-07-14 22:02:25.662 [INFO][4238] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:25.699904 containerd[1440]: 2025-07-14 22:02:25.662 [INFO][4238] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" iface="eth0" netns="/var/run/netns/cni-ead97455-f66a-74c7-7b15-5de0c82c3302" Jul 14 22:02:25.699904 containerd[1440]: 2025-07-14 22:02:25.662 [INFO][4238] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" iface="eth0" netns="/var/run/netns/cni-ead97455-f66a-74c7-7b15-5de0c82c3302" Jul 14 22:02:25.699904 containerd[1440]: 2025-07-14 22:02:25.662 [INFO][4238] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" iface="eth0" netns="/var/run/netns/cni-ead97455-f66a-74c7-7b15-5de0c82c3302" Jul 14 22:02:25.699904 containerd[1440]: 2025-07-14 22:02:25.662 [INFO][4238] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:25.699904 containerd[1440]: 2025-07-14 22:02:25.662 [INFO][4238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:25.699904 containerd[1440]: 2025-07-14 22:02:25.686 [INFO][4255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" HandleID="k8s-pod-network.24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Workload="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:25.699904 containerd[1440]: 2025-07-14 22:02:25.686 [INFO][4255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:25.699904 containerd[1440]: 2025-07-14 22:02:25.686 [INFO][4255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:25.699904 containerd[1440]: 2025-07-14 22:02:25.695 [WARNING][4255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" HandleID="k8s-pod-network.24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Workload="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:25.699904 containerd[1440]: 2025-07-14 22:02:25.695 [INFO][4255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" HandleID="k8s-pod-network.24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Workload="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:25.699904 containerd[1440]: 2025-07-14 22:02:25.696 [INFO][4255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:25.699904 containerd[1440]: 2025-07-14 22:02:25.698 [INFO][4238] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:25.703548 containerd[1440]: time="2025-07-14T22:02:25.700044781Z" level=info msg="TearDown network for sandbox \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\" successfully" Jul 14 22:02:25.703548 containerd[1440]: time="2025-07-14T22:02:25.700071381Z" level=info msg="StopPodSandbox for \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\" returns successfully" Jul 14 22:02:25.703548 containerd[1440]: time="2025-07-14T22:02:25.701864412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687547dbff-8vw7r,Uid:db3be38c-70ff-4df0-a2d5-d0462c499962,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:02:25.702583 systemd[1]: run-netns-cni\x2dead97455\x2df66a\x2d74c7\x2d7b15\x2d5de0c82c3302.mount: Deactivated successfully. Jul 14 22:02:25.709560 containerd[1440]: 2025-07-14 22:02:25.665 [INFO][4239] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:25.709560 containerd[1440]: 2025-07-14 22:02:25.665 [INFO][4239] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" iface="eth0" netns="/var/run/netns/cni-98425918-8ebf-1aad-23fc-05c89f2318b4" Jul 14 22:02:25.709560 containerd[1440]: 2025-07-14 22:02:25.666 [INFO][4239] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" iface="eth0" netns="/var/run/netns/cni-98425918-8ebf-1aad-23fc-05c89f2318b4" Jul 14 22:02:25.709560 containerd[1440]: 2025-07-14 22:02:25.666 [INFO][4239] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" iface="eth0" netns="/var/run/netns/cni-98425918-8ebf-1aad-23fc-05c89f2318b4" Jul 14 22:02:25.709560 containerd[1440]: 2025-07-14 22:02:25.666 [INFO][4239] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:25.709560 containerd[1440]: 2025-07-14 22:02:25.666 [INFO][4239] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:25.709560 containerd[1440]: 2025-07-14 22:02:25.686 [INFO][4261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" HandleID="k8s-pod-network.8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Workload="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:25.709560 containerd[1440]: 2025-07-14 22:02:25.686 [INFO][4261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:25.709560 containerd[1440]: 2025-07-14 22:02:25.696 [INFO][4261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:25.709560 containerd[1440]: 2025-07-14 22:02:25.704 [WARNING][4261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" HandleID="k8s-pod-network.8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Workload="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:25.709560 containerd[1440]: 2025-07-14 22:02:25.704 [INFO][4261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" HandleID="k8s-pod-network.8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Workload="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:25.709560 containerd[1440]: 2025-07-14 22:02:25.706 [INFO][4261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:25.709560 containerd[1440]: 2025-07-14 22:02:25.707 [INFO][4239] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:25.710147 containerd[1440]: time="2025-07-14T22:02:25.710046271Z" level=info msg="TearDown network for sandbox \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\" successfully" Jul 14 22:02:25.710147 containerd[1440]: time="2025-07-14T22:02:25.710070231Z" level=info msg="StopPodSandbox for \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\" returns successfully" Jul 14 22:02:25.710646 containerd[1440]: time="2025-07-14T22:02:25.710617961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687547dbff-nxmhz,Uid:b98ca949-7af6-44b4-b15a-f51c51b97182,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:02:25.712047 systemd[1]: run-netns-cni\x2d98425918\x2d8ebf\x2d1aad\x2d23fc\x2d05c89f2318b4.mount: Deactivated successfully. Jul 14 22:02:25.852034 systemd-networkd[1380]: calid8902523bf7: Link UP Jul 14 22:02:25.852244 systemd-networkd[1380]: calid8902523bf7: Gained carrier Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.773 [INFO][4279] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.786 [INFO][4279] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0 calico-apiserver-687547dbff- calico-apiserver b98ca949-7af6-44b4-b15a-f51c51b97182 926 0 2025-07-14 22:02:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:687547dbff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-687547dbff-nxmhz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid8902523bf7 [] [] }} ContainerID="1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-nxmhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--nxmhz-" Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.787 [INFO][4279] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-nxmhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.811 [INFO][4302] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" HandleID="k8s-pod-network.1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" Workload="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.811 [INFO][4302] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" HandleID="k8s-pod-network.1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" Workload="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-687547dbff-nxmhz", "timestamp":"2025-07-14 22:02:25.811781602 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.812 [INFO][4302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.812 [INFO][4302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.812 [INFO][4302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.820 [INFO][4302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" host="localhost" Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.825 [INFO][4302] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.829 [INFO][4302] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.830 [INFO][4302] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.832 [INFO][4302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.832 [INFO][4302] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" host="localhost" Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.835 [INFO][4302] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.839 [INFO][4302] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" host="localhost" Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.844 [INFO][4302] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" host="localhost" Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.844 [INFO][4302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" host="localhost" Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.844 [INFO][4302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:02:25.865345 containerd[1440]: 2025-07-14 22:02:25.844 [INFO][4302] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" HandleID="k8s-pod-network.1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" Workload="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:25.866871 containerd[1440]: 2025-07-14 22:02:25.847 [INFO][4279] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-nxmhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0", GenerateName:"calico-apiserver-687547dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"b98ca949-7af6-44b4-b15a-f51c51b97182", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687547dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-687547dbff-nxmhz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8902523bf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:25.866871 containerd[1440]: 2025-07-14 22:02:25.847 [INFO][4279] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-nxmhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:25.866871 containerd[1440]: 2025-07-14 22:02:25.847 [INFO][4279] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8902523bf7 ContainerID="1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-nxmhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:25.866871 containerd[1440]: 2025-07-14 22:02:25.853 [INFO][4279] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-nxmhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:25.866871 containerd[1440]: 2025-07-14 22:02:25.853 [INFO][4279] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-nxmhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0", GenerateName:"calico-apiserver-687547dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"b98ca949-7af6-44b4-b15a-f51c51b97182", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687547dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c", Pod:"calico-apiserver-687547dbff-nxmhz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8902523bf7", MAC:"2e:c2:b6:3f:67:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:25.866871 containerd[1440]: 2025-07-14 22:02:25.863 [INFO][4279] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-nxmhz" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:25.878420 containerd[1440]: time="2025-07-14T22:02:25.878301813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:02:25.878420 containerd[1440]: time="2025-07-14T22:02:25.878371895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:02:25.878420 containerd[1440]: time="2025-07-14T22:02:25.878388695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:25.878573 containerd[1440]: time="2025-07-14T22:02:25.878544978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:25.901621 systemd[1]: Started cri-containerd-1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c.scope - libcontainer container 1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c. 
Jul 14 22:02:25.910815 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:02:25.926644 containerd[1440]: time="2025-07-14T22:02:25.925614778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687547dbff-nxmhz,Uid:b98ca949-7af6-44b4-b15a-f51c51b97182,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c\"" Jul 14 22:02:25.927190 containerd[1440]: time="2025-07-14T22:02:25.927122364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:02:25.948040 systemd-networkd[1380]: cali54e4f361bab: Link UP Jul 14 22:02:25.948225 systemd-networkd[1380]: cali54e4f361bab: Gained carrier Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.773 [INFO][4272] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.793 [INFO][4272] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0 calico-apiserver-687547dbff- calico-apiserver db3be38c-70ff-4df0-a2d5-d0462c499962 925 0 2025-07-14 22:02:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:687547dbff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-687547dbff-8vw7r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali54e4f361bab [] [] }} ContainerID="d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-8vw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--8vw7r-" Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.793 [INFO][4272] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-8vw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.815 [INFO][4308] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" HandleID="k8s-pod-network.d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" Workload="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.815 [INFO][4308] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" HandleID="k8s-pod-network.d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" Workload="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd2e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-687547dbff-8vw7r", "timestamp":"2025-07-14 22:02:25.815739829 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:02:25.958441 containerd[1440]: 
2025-07-14 22:02:25.816 [INFO][4308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.844 [INFO][4308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.844 [INFO][4308] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.921 [INFO][4308] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" host="localhost" Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.925 [INFO][4308] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.930 [INFO][4308] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.932 [INFO][4308] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.934 [INFO][4308] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.934 [INFO][4308] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" host="localhost" Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.935 [INFO][4308] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33 Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.939 [INFO][4308] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" host="localhost" Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.943 [INFO][4308] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" host="localhost" Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.943 [INFO][4308] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" host="localhost" Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.943 [INFO][4308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:02:25.958441 containerd[1440]: 2025-07-14 22:02:25.943 [INFO][4308] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" HandleID="k8s-pod-network.d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" Workload="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:25.958994 containerd[1440]: 2025-07-14 22:02:25.945 [INFO][4272] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-8vw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0", GenerateName:"calico-apiserver-687547dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"db3be38c-70ff-4df0-a2d5-d0462c499962", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687547dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-687547dbff-8vw7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54e4f361bab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:25.958994 containerd[1440]: 2025-07-14 22:02:25.946 [INFO][4272] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-8vw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:25.958994 containerd[1440]: 2025-07-14 22:02:25.946 [INFO][4272] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54e4f361bab ContainerID="d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-8vw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:25.958994 containerd[1440]: 2025-07-14 22:02:25.948 [INFO][4272] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-8vw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:25.958994 containerd[1440]: 2025-07-14 22:02:25.948 [INFO][4272] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-8vw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0", GenerateName:"calico-apiserver-687547dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"db3be38c-70ff-4df0-a2d5-d0462c499962", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687547dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33", Pod:"calico-apiserver-687547dbff-8vw7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54e4f361bab", MAC:"92:b3:35:18:7b:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:25.958994 containerd[1440]: 2025-07-14 22:02:25.956 [INFO][4272] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33" Namespace="calico-apiserver" Pod="calico-apiserver-687547dbff-8vw7r" WorkloadEndpoint="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:25.972661 containerd[1440]: time="2025-07-14T22:02:25.972560217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:02:25.972661 containerd[1440]: time="2025-07-14T22:02:25.972634338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:02:25.972808 containerd[1440]: time="2025-07-14T22:02:25.972662859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:25.973428 containerd[1440]: time="2025-07-14T22:02:25.973223508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:25.987827 systemd[1]: Started cri-containerd-d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33.scope - libcontainer container d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33. 
Jul 14 22:02:25.996982 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:02:26.021196 containerd[1440]: time="2025-07-14T22:02:26.021101069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-687547dbff-8vw7r,Uid:db3be38c-70ff-4df0-a2d5-d0462c499962,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33\"" Jul 14 22:02:27.163690 systemd-networkd[1380]: cali54e4f361bab: Gained IPv6LL Jul 14 22:02:27.610099 containerd[1440]: time="2025-07-14T22:02:27.609938967Z" level=info msg="StopPodSandbox for \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\"" Jul 14 22:02:27.610943 containerd[1440]: time="2025-07-14T22:02:27.610730579Z" level=info msg="StopPodSandbox for \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\"" Jul 14 22:02:27.611491 containerd[1440]: time="2025-07-14T22:02:27.611245107Z" level=info msg="StopPodSandbox for \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\"" Jul 14 22:02:27.612421 systemd-networkd[1380]: calid8902523bf7: Gained IPv6LL Jul 14 22:02:27.764282 containerd[1440]: 2025-07-14 22:02:27.709 [INFO][4506] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:27.764282 containerd[1440]: 2025-07-14 22:02:27.709 [INFO][4506] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" iface="eth0" netns="/var/run/netns/cni-532f62bf-6d29-4cf5-e9d8-8d8b7d96242a" Jul 14 22:02:27.764282 containerd[1440]: 2025-07-14 22:02:27.710 [INFO][4506] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" iface="eth0" netns="/var/run/netns/cni-532f62bf-6d29-4cf5-e9d8-8d8b7d96242a" Jul 14 22:02:27.764282 containerd[1440]: 2025-07-14 22:02:27.712 [INFO][4506] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" iface="eth0" netns="/var/run/netns/cni-532f62bf-6d29-4cf5-e9d8-8d8b7d96242a" Jul 14 22:02:27.764282 containerd[1440]: 2025-07-14 22:02:27.712 [INFO][4506] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:27.764282 containerd[1440]: 2025-07-14 22:02:27.712 [INFO][4506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:27.764282 containerd[1440]: 2025-07-14 22:02:27.746 [INFO][4525] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" HandleID="k8s-pod-network.eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Workload="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:27.764282 containerd[1440]: 2025-07-14 22:02:27.746 [INFO][4525] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:27.764282 containerd[1440]: 2025-07-14 22:02:27.746 [INFO][4525] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:27.764282 containerd[1440]: 2025-07-14 22:02:27.756 [WARNING][4525] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" HandleID="k8s-pod-network.eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Workload="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:27.764282 containerd[1440]: 2025-07-14 22:02:27.756 [INFO][4525] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" HandleID="k8s-pod-network.eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Workload="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:27.764282 containerd[1440]: 2025-07-14 22:02:27.759 [INFO][4525] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:27.764282 containerd[1440]: 2025-07-14 22:02:27.762 [INFO][4506] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:27.766443 systemd[1]: run-netns-cni\x2d532f62bf\x2d6d29\x2d4cf5\x2de9d8\x2d8d8b7d96242a.mount: Deactivated successfully. Jul 14 22:02:27.767085 containerd[1440]: time="2025-07-14T22:02:27.766925775Z" level=info msg="TearDown network for sandbox \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\" successfully" Jul 14 22:02:27.767085 containerd[1440]: time="2025-07-14T22:02:27.766958895Z" level=info msg="StopPodSandbox for \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\" returns successfully" Jul 14 22:02:27.767360 kubelet[2535]: E0714 22:02:27.767324 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:02:27.769517 containerd[1440]: time="2025-07-14T22:02:27.769444294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ss2ss,Uid:7c8f785c-fa60-472e-a6e1-a21274af8925,Namespace:kube-system,Attempt:1,}" Jul 14 22:02:27.777057 containerd[1440]: 2025-07-14 22:02:27.728 [INFO][4493] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:27.777057 containerd[1440]: 2025-07-14 22:02:27.730 [INFO][4493] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" iface="eth0" netns="/var/run/netns/cni-7ad00d6b-ea95-0d32-b8dc-d3d7933179ec" Jul 14 22:02:27.777057 containerd[1440]: 2025-07-14 22:02:27.730 [INFO][4493] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" iface="eth0" netns="/var/run/netns/cni-7ad00d6b-ea95-0d32-b8dc-d3d7933179ec" Jul 14 22:02:27.777057 containerd[1440]: 2025-07-14 22:02:27.731 [INFO][4493] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" iface="eth0" netns="/var/run/netns/cni-7ad00d6b-ea95-0d32-b8dc-d3d7933179ec" Jul 14 22:02:27.777057 containerd[1440]: 2025-07-14 22:02:27.731 [INFO][4493] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:27.777057 containerd[1440]: 2025-07-14 22:02:27.731 [INFO][4493] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:27.777057 containerd[1440]: 2025-07-14 22:02:27.756 [INFO][4537] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" HandleID="k8s-pod-network.195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Workload="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:27.777057 containerd[1440]: 2025-07-14 22:02:27.756 [INFO][4537] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:27.777057 containerd[1440]: 2025-07-14 22:02:27.759 [INFO][4537] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:27.777057 containerd[1440]: 2025-07-14 22:02:27.769 [WARNING][4537] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" HandleID="k8s-pod-network.195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Workload="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:27.777057 containerd[1440]: 2025-07-14 22:02:27.769 [INFO][4537] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" HandleID="k8s-pod-network.195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Workload="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:27.777057 containerd[1440]: 2025-07-14 22:02:27.771 [INFO][4537] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:27.777057 containerd[1440]: 2025-07-14 22:02:27.774 [INFO][4493] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:27.780625 containerd[1440]: time="2025-07-14T22:02:27.780420905Z" level=info msg="TearDown network for sandbox \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\" successfully" Jul 14 22:02:27.780625 containerd[1440]: time="2025-07-14T22:02:27.780491986Z" level=info msg="StopPodSandbox for \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\" returns successfully" Jul 14 22:02:27.781215 systemd[1]: run-netns-cni\x2d7ad00d6b\x2dea95\x2d0d32\x2db8dc\x2dd3d7933179ec.mount: Deactivated successfully. Jul 14 22:02:27.782106 containerd[1440]: time="2025-07-14T22:02:27.781770606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-66djn,Uid:d2664461-d898-4ff2-850a-8e3d73709f9a,Namespace:calico-system,Attempt:1,}" Jul 14 22:02:27.789043 containerd[1440]: 2025-07-14 22:02:27.712 [INFO][4483] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:27.789043 containerd[1440]: 2025-07-14 22:02:27.712 [INFO][4483] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" iface="eth0" netns="/var/run/netns/cni-76ef2e73-6586-3659-3fe7-dec26bf54c4a" Jul 14 22:02:27.789043 containerd[1440]: 2025-07-14 22:02:27.712 [INFO][4483] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" iface="eth0" netns="/var/run/netns/cni-76ef2e73-6586-3659-3fe7-dec26bf54c4a" Jul 14 22:02:27.789043 containerd[1440]: 2025-07-14 22:02:27.712 [INFO][4483] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" iface="eth0" netns="/var/run/netns/cni-76ef2e73-6586-3659-3fe7-dec26bf54c4a" Jul 14 22:02:27.789043 containerd[1440]: 2025-07-14 22:02:27.712 [INFO][4483] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:27.789043 containerd[1440]: 2025-07-14 22:02:27.712 [INFO][4483] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:27.789043 containerd[1440]: 2025-07-14 22:02:27.759 [INFO][4526] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" HandleID="k8s-pod-network.43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Workload="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:27.789043 containerd[1440]: 2025-07-14 22:02:27.759 [INFO][4526] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:27.789043 containerd[1440]: 2025-07-14 22:02:27.771 [INFO][4526] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:27.789043 containerd[1440]: 2025-07-14 22:02:27.783 [WARNING][4526] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" HandleID="k8s-pod-network.43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Workload="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:27.789043 containerd[1440]: 2025-07-14 22:02:27.783 [INFO][4526] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" HandleID="k8s-pod-network.43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Workload="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:27.789043 containerd[1440]: 2025-07-14 22:02:27.785 [INFO][4526] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:27.789043 containerd[1440]: 2025-07-14 22:02:27.787 [INFO][4483] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:27.789608 containerd[1440]: time="2025-07-14T22:02:27.789437166Z" level=info msg="TearDown network for sandbox \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\" successfully" Jul 14 22:02:27.789608 containerd[1440]: time="2025-07-14T22:02:27.789490006Z" level=info msg="StopPodSandbox for \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\" returns successfully" Jul 14 22:02:27.791089 systemd[1]: run-netns-cni\x2d76ef2e73\x2d6586\x2d3659\x2d3fe7\x2ddec26bf54c4a.mount: Deactivated successfully. 
Jul 14 22:02:27.791942 containerd[1440]: time="2025-07-14T22:02:27.791408196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbwq6,Uid:88e059ee-2b3a-4b57-8789-ebeef41ce071,Namespace:calico-system,Attempt:1,}" Jul 14 22:02:27.913390 containerd[1440]: time="2025-07-14T22:02:27.912563565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:27.913390 containerd[1440]: time="2025-07-14T22:02:27.913146414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 14 22:02:27.915354 containerd[1440]: time="2025-07-14T22:02:27.915294448Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:27.918837 containerd[1440]: time="2025-07-14T22:02:27.918787622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:27.922493 containerd[1440]: time="2025-07-14T22:02:27.922431799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.995277435s" Jul 14 22:02:27.922493 containerd[1440]: time="2025-07-14T22:02:27.922492840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 14 22:02:27.926660 containerd[1440]: time="2025-07-14T22:02:27.926621305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:02:27.927665 containerd[1440]: time="2025-07-14T22:02:27.927623920Z" level=info msg="CreateContainer within sandbox \"1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:02:27.954913 containerd[1440]: time="2025-07-14T22:02:27.954781424Z" level=info msg="CreateContainer within sandbox \"1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"174484e8f85c332549ded643fd72df010f2f02af374c43c51dc0273aaa192dda\"" Jul 14 22:02:27.956614 containerd[1440]: time="2025-07-14T22:02:27.956579572Z" level=info msg="StartContainer for \"174484e8f85c332549ded643fd72df010f2f02af374c43c51dc0273aaa192dda\"" Jul 14 22:02:28.001738 systemd[1]: Started cri-containerd-174484e8f85c332549ded643fd72df010f2f02af374c43c51dc0273aaa192dda.scope - libcontainer container 174484e8f85c332549ded643fd72df010f2f02af374c43c51dc0273aaa192dda. 
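The pull/create/start sequence above is driven by kubelet through containerd's CRI plugin, but the same three steps can be reproduced against the containerd socket directly. A sketch using the standalone Go client, assuming the default socket path and the k8s.io namespace; the container ID and snapshot name are made up for the example:

package main

import (
    "context"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/cio"
    "github.com/containerd/containerd/namespaces"
    "github.com/containerd/containerd/oci"
)

func main() {
    client, err := containerd.New("/run/containerd/containerd.sock")
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    // The CRI plugin keeps Kubernetes containers in the "k8s.io" namespace.
    ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    // Pull, as in the "PullImage ... returns image reference" record above.
    image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.2",
        containerd.WithPullUnpack)
    if err != nil {
        log.Fatal(err)
    }

    // CreateContainer, then StartContainer == NewTask + task.Start;
    // starting the task is what creates the cri-containerd-*.scope unit.
    container, err := client.NewContainer(ctx, "apiserver-example",
        containerd.WithImage(image),
        containerd.WithNewSnapshot("apiserver-example-snap", image),
        containerd.WithNewSpec(oci.WithImageConfig(image)),
    )
    if err != nil {
        log.Fatal(err)
    }
    task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
    if err != nil {
        log.Fatal(err)
    }
    if err := task.Start(ctx); err != nil {
        log.Fatal(err)
    }
}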
Jul 14 22:02:28.044838 systemd-networkd[1380]: calice22738fd7f: Link UP Jul 14 22:02:28.045536 systemd-networkd[1380]: calice22738fd7f: Gained carrier Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:27.926 [INFO][4555] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:27.942 [INFO][4555] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0 coredns-7c65d6cfc9- kube-system 7c8f785c-fa60-472e-a6e1-a21274af8925 941 0 2025-07-14 22:01:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-ss2ss eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calice22738fd7f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ss2ss" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ss2ss-" Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:27.942 [INFO][4555] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ss2ss" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:27.985 [INFO][4599] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" HandleID="k8s-pod-network.8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" Workload="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:27.986 [INFO][4599] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" HandleID="k8s-pod-network.8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" Workload="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3010), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-ss2ss", "timestamp":"2025-07-14 22:02:27.985846948 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:27.986 [INFO][4599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:27.986 [INFO][4599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:27.986 [INFO][4599] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:27.997 [INFO][4599] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" host="localhost" Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:28.006 [INFO][4599] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:28.013 [INFO][4599] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:28.021 [INFO][4599] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:28.023 [INFO][4599] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:28.024 [INFO][4599] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" host="localhost" Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:28.025 [INFO][4599] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1 Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:28.029 [INFO][4599] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" host="localhost" Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:28.036 [INFO][4599] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" host="localhost" Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:28.036 [INFO][4599] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" host="localhost" Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:28.036 [INFO][4599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:02:28.058473 containerd[1440]: 2025-07-14 22:02:28.036 [INFO][4599] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" HandleID="k8s-pod-network.8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" Workload="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:28.059065 containerd[1440]: 2025-07-14 22:02:28.042 [INFO][4555] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ss2ss" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7c8f785c-fa60-472e-a6e1-a21274af8925", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-ss2ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice22738fd7f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:28.059065 containerd[1440]: 2025-07-14 22:02:28.042 [INFO][4555] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ss2ss" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:28.059065 containerd[1440]: 2025-07-14 22:02:28.042 [INFO][4555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice22738fd7f ContainerID="8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ss2ss" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:28.059065 containerd[1440]: 2025-07-14 22:02:28.046 [INFO][4555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ss2ss" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:28.059065 
containerd[1440]: 2025-07-14 22:02:28.047 [INFO][4555] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ss2ss" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7c8f785c-fa60-472e-a6e1-a21274af8925", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1", Pod:"coredns-7c65d6cfc9-ss2ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice22738fd7f", MAC:"56:b6:f5:96:61:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:28.059065 containerd[1440]: 2025-07-14 22:02:28.056 [INFO][4555] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1" Namespace="kube-system" Pod="coredns-7c65d6cfc9-ss2ss" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:28.082333 containerd[1440]: time="2025-07-14T22:02:28.082236396Z" level=info msg="StartContainer for \"174484e8f85c332549ded643fd72df010f2f02af374c43c51dc0273aaa192dda\" returns successfully" Jul 14 22:02:28.104427 containerd[1440]: time="2025-07-14T22:02:28.104296765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:02:28.104427 containerd[1440]: time="2025-07-14T22:02:28.104357966Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:02:28.104427 containerd[1440]: time="2025-07-14T22:02:28.104372606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:28.106656 containerd[1440]: time="2025-07-14T22:02:28.104641090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:28.134091 systemd[1]: Started cri-containerd-8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1.scope - libcontainer container 8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1. Jul 14 22:02:28.137326 systemd-networkd[1380]: cali82628ede15e: Link UP Jul 14 22:02:28.139408 systemd-networkd[1380]: cali82628ede15e: Gained carrier Jul 14 22:02:28.150862 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:27.946 [INFO][4569] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:27.964 [INFO][4569] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qbwq6-eth0 csi-node-driver- calico-system 88e059ee-2b3a-4b57-8789-ebeef41ce071 942 0 2025-07-14 22:02:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qbwq6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali82628ede15e [] [] }} ContainerID="d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" Namespace="calico-system" Pod="csi-node-driver-qbwq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--qbwq6-" Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:27.964 [INFO][4569] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" Namespace="calico-system" Pod="csi-node-driver-qbwq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.009 [INFO][4615] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" HandleID="k8s-pod-network.d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" Workload="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.009 [INFO][4615] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" HandleID="k8s-pod-network.d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" Workload="localhost-k8s-csi--node--driver--qbwq6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dd60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qbwq6", "timestamp":"2025-07-14 22:02:28.009066985 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.009 [INFO][4615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.036 [INFO][4615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.036 [INFO][4615] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.098 [INFO][4615] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" host="localhost" Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.107 [INFO][4615] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.111 [INFO][4615] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.113 [INFO][4615] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.115 [INFO][4615] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.117 [INFO][4615] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" host="localhost" Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.119 [INFO][4615] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.122 [INFO][4615] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" host="localhost" Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.129 [INFO][4615] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" host="localhost" Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.129 [INFO][4615] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" host="localhost" Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.129 [INFO][4615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
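Once an address is claimed, the plugin creates a veth pair, moves one end into the pod's netns as eth0, and names the host end (calice22738fd7f and cali82628ede15e above); bringing that host end up is what systemd-networkd then reports as "Link UP" and "Gained carrier". A rough sketch of the host-side half using github.com/vishvananda/netlink; the interface names are hypothetical, it needs root, and the netns move is omitted:

package main

import (
    "log"

    "github.com/vishvananda/netlink"
)

func main() {
    veth := &netlink.Veth{
        LinkAttrs: netlink.LinkAttrs{Name: "caliexample0"}, // host-side name
        PeerName:  "caliexample0p",                         // later moved into the pod netns as eth0
    }
    if err := netlink.LinkAdd(veth); err != nil {
        log.Fatal(err)
    }
    // Setting the host side up produces the "Link UP" / "Gained carrier"
    // records that systemd-networkd logs for the cali* interface.
    if err := netlink.LinkSetUp(veth); err != nil {
        log.Fatal(err)
    }
}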
Jul 14 22:02:28.152441 containerd[1440]: 2025-07-14 22:02:28.129 [INFO][4615] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" HandleID="k8s-pod-network.d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" Workload="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:28.153000 containerd[1440]: 2025-07-14 22:02:28.135 [INFO][4569] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" Namespace="calico-system" Pod="csi-node-driver-qbwq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--qbwq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qbwq6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88e059ee-2b3a-4b57-8789-ebeef41ce071", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qbwq6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali82628ede15e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:28.153000 containerd[1440]: 2025-07-14 22:02:28.135 [INFO][4569] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" Namespace="calico-system" Pod="csi-node-driver-qbwq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:28.153000 containerd[1440]: 2025-07-14 22:02:28.135 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82628ede15e ContainerID="d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" Namespace="calico-system" Pod="csi-node-driver-qbwq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:28.153000 containerd[1440]: 2025-07-14 22:02:28.138 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" Namespace="calico-system" Pod="csi-node-driver-qbwq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:28.153000 containerd[1440]: 2025-07-14 22:02:28.138 [INFO][4569] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" Namespace="calico-system" Pod="csi-node-driver-qbwq6" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qbwq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qbwq6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88e059ee-2b3a-4b57-8789-ebeef41ce071", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c", Pod:"csi-node-driver-qbwq6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali82628ede15e", MAC:"62:a8:b9:19:e4:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:28.153000 containerd[1440]: 2025-07-14 22:02:28.150 [INFO][4569] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c" Namespace="calico-system" Pod="csi-node-driver-qbwq6" WorkloadEndpoint="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:28.169219 containerd[1440]: time="2025-07-14T22:02:28.168931329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:02:28.169219 containerd[1440]: time="2025-07-14T22:02:28.168986170Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:02:28.169219 containerd[1440]: time="2025-07-14T22:02:28.169002930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:28.169219 containerd[1440]: time="2025-07-14T22:02:28.169082731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:28.176848 containerd[1440]: time="2025-07-14T22:02:28.176802886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-ss2ss,Uid:7c8f785c-fa60-472e-a6e1-a21274af8925,Namespace:kube-system,Attempt:1,} returns sandbox id \"8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1\"" Jul 14 22:02:28.179139 kubelet[2535]: E0714 22:02:28.178979 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:02:28.182190 containerd[1440]: time="2025-07-14T22:02:28.182106285Z" level=info msg="CreateContainer within sandbox \"8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:02:28.191972 systemd[1]: Started cri-containerd-d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c.scope - libcontainer container d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c. Jul 14 22:02:28.204851 containerd[1440]: time="2025-07-14T22:02:28.204808824Z" level=info msg="CreateContainer within sandbox \"8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0578bb6026ebc84fb6481040132d0e8e0e57a9fa31ebfc52fce2158d88680923\"" Jul 14 22:02:28.206379 containerd[1440]: time="2025-07-14T22:02:28.206345367Z" level=info msg="StartContainer for \"0578bb6026ebc84fb6481040132d0e8e0e57a9fa31ebfc52fce2158d88680923\"" Jul 14 22:02:28.207594 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:02:28.220645 containerd[1440]: time="2025-07-14T22:02:28.220612900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qbwq6,Uid:88e059ee-2b3a-4b57-8789-ebeef41ce071,Namespace:calico-system,Attempt:1,} returns sandbox id \"d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c\"" Jul 14 22:02:28.239682 systemd-networkd[1380]: caliae04b4262bc: Link UP Jul 14 22:02:28.239911 systemd-networkd[1380]: caliae04b4262bc: Gained carrier Jul 14 22:02:28.242646 systemd[1]: Started cri-containerd-0578bb6026ebc84fb6481040132d0e8e0e57a9fa31ebfc52fce2158d88680923.scope - libcontainer container 0578bb6026ebc84fb6481040132d0e8e0e57a9fa31ebfc52fce2158d88680923. 
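The kubelet "Nameserver limits exceeded" error above is the resolver limit at work: a pod's resolv.conf may carry at most three nameservers (the classic glibc MAXNS cap), so a longer host list is truncated to "1.1.1.1 1.0.0.1 8.8.8.8" and the rest are dropped with exactly this warning. A sketch of the same trimming rule (simplified parsing, not kubelet's actual code):

package main

import (
    "fmt"
    "strings"
)

const maxNameservers = 3 // glibc MAXNS; kubelet applies the same cap

func trimNameservers(resolvConf string) []string {
    var servers []string
    for _, line := range strings.Split(resolvConf, "\n") {
        fields := strings.Fields(line)
        if len(fields) >= 2 && fields[0] == "nameserver" {
            servers = append(servers, fields[1])
        }
    }
    if len(servers) > maxNameservers {
        servers = servers[:maxNameservers] // the dropped tail triggers the kubelet error
    }
    return servers
}

func main() {
    host := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    fmt.Println(trimNameservers(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}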
Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:27.953 [INFO][4573] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:27.970 [INFO][4573] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--66djn-eth0 goldmane-58fd7646b9- calico-system d2664461-d898-4ff2-850a-8e3d73709f9a 943 0 2025-07-14 22:02:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-66djn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliae04b4262bc [] [] }} ContainerID="576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" Namespace="calico-system" Pod="goldmane-58fd7646b9-66djn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--66djn-" Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:27.970 [INFO][4573] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" Namespace="calico-system" Pod="goldmane-58fd7646b9-66djn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.011 [INFO][4622] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" HandleID="k8s-pod-network.576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" Workload="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.012 [INFO][4622] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" HandleID="k8s-pod-network.576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" Workload="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a0e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-66djn", "timestamp":"2025-07-14 22:02:28.011815906 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.012 [INFO][4622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.129 [INFO][4622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.130 [INFO][4622] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.199 [INFO][4622] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" host="localhost" Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.206 [INFO][4622] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.217 [INFO][4622] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.219 [INFO][4622] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.222 [INFO][4622] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.222 [INFO][4622] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" host="localhost" Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.224 [INFO][4622] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.228 [INFO][4622] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" host="localhost" Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.233 [INFO][4622] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" host="localhost" Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.234 [INFO][4622] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" host="localhost" Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.234 [INFO][4622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
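The Calico plugin lines embedded in the containerd stream share one fixed shape: timestamp, [LEVEL][id], source file and line, then the message. When mining these logs, that structure is easy to split apart; a regexp sketch (the id field's exact meaning is an assumption here):

package main

import (
    "fmt"
    "regexp"
)

// Matches e.g.: 2025-07-14 22:02:28.234 [INFO][4622] ipam/ipam.go 878: Auto-assigned ...
var calicoLine = regexp.MustCompile(
    `^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \[(\w+)\]\[(\d+)\] (\S+) (\d+): (.*)$`,
)

func main() {
    line := `2025-07-14 22:02:28.234 [INFO][4622] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26]`
    if m := calicoLine.FindStringSubmatch(line); m != nil {
        fmt.Printf("time=%s level=%s id=%s file=%s:%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
}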
Jul 14 22:02:28.256932 containerd[1440]: 2025-07-14 22:02:28.234 [INFO][4622] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" HandleID="k8s-pod-network.576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" Workload="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:28.257500 containerd[1440]: 2025-07-14 22:02:28.236 [INFO][4573] cni-plugin/k8s.go 418: Populated endpoint ContainerID="576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" Namespace="calico-system" Pod="goldmane-58fd7646b9-66djn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--66djn-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d2664461-d898-4ff2-850a-8e3d73709f9a", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-66djn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliae04b4262bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:28.257500 containerd[1440]: 2025-07-14 22:02:28.236 [INFO][4573] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" Namespace="calico-system" Pod="goldmane-58fd7646b9-66djn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:28.257500 containerd[1440]: 2025-07-14 22:02:28.236 [INFO][4573] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae04b4262bc ContainerID="576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" Namespace="calico-system" Pod="goldmane-58fd7646b9-66djn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:28.257500 containerd[1440]: 2025-07-14 22:02:28.239 [INFO][4573] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" Namespace="calico-system" Pod="goldmane-58fd7646b9-66djn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:28.257500 containerd[1440]: 2025-07-14 22:02:28.241 [INFO][4573] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" Namespace="calico-system" Pod="goldmane-58fd7646b9-66djn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--66djn-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d2664461-d898-4ff2-850a-8e3d73709f9a", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a", Pod:"goldmane-58fd7646b9-66djn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliae04b4262bc", MAC:"5a:aa:d7:f1:0a:6c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:28.257500 containerd[1440]: 2025-07-14 22:02:28.252 [INFO][4573] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a" Namespace="calico-system" Pod="goldmane-58fd7646b9-66djn" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:28.261550 containerd[1440]: time="2025-07-14T22:02:28.260580616Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:28.264650 containerd[1440]: time="2025-07-14T22:02:28.264623796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 14 22:02:28.268188 containerd[1440]: time="2025-07-14T22:02:28.268154329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 341.493264ms" Jul 14 22:02:28.268250 containerd[1440]: time="2025-07-14T22:02:28.268193929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 14 22:02:28.270169 containerd[1440]: time="2025-07-14T22:02:28.269959196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 14 22:02:28.271566 containerd[1440]: time="2025-07-14T22:02:28.271508499Z" level=info msg="CreateContainer within sandbox \"d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:02:28.282593 containerd[1440]: time="2025-07-14T22:02:28.282055776Z" level=info msg="CreateContainer within sandbox \"d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"54a5fc2418bc92a70942d43fba5ab0159f1dcd809b41faf4a1bc8fb359136f1a\"" Jul 14 22:02:28.282593 containerd[1440]: time="2025-07-14T22:02:28.282071656Z" level=info msg="StartContainer for \"0578bb6026ebc84fb6481040132d0e8e0e57a9fa31ebfc52fce2158d88680923\" returns successfully" Jul 14 22:02:28.283272 containerd[1440]: time="2025-07-14T22:02:28.283232954Z" level=info msg="StartContainer for \"54a5fc2418bc92a70942d43fba5ab0159f1dcd809b41faf4a1bc8fb359136f1a\"" Jul 14 22:02:28.283617 containerd[1440]: time="2025-07-14T22:02:28.283447317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:02:28.283707 containerd[1440]: time="2025-07-14T22:02:28.283675640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:02:28.283778 containerd[1440]: time="2025-07-14T22:02:28.283750441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:28.284119 containerd[1440]: time="2025-07-14T22:02:28.284077446Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:28.306620 systemd[1]: Started cri-containerd-576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a.scope - libcontainer container 576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a. Jul 14 22:02:28.311573 systemd[1]: Started cri-containerd-54a5fc2418bc92a70942d43fba5ab0159f1dcd809b41faf4a1bc8fb359136f1a.scope - libcontainer container 54a5fc2418bc92a70942d43fba5ab0159f1dcd809b41faf4a1bc8fb359136f1a. Jul 14 22:02:28.332272 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:02:28.360379 containerd[1440]: time="2025-07-14T22:02:28.360280703Z" level=info msg="StartContainer for \"54a5fc2418bc92a70942d43fba5ab0159f1dcd809b41faf4a1bc8fb359136f1a\" returns successfully" Jul 14 22:02:28.367885 containerd[1440]: time="2025-07-14T22:02:28.367794935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-66djn,Uid:d2664461-d898-4ff2-850a-8e3d73709f9a,Namespace:calico-system,Attempt:1,} returns sandbox id \"576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a\"" Jul 14 22:02:28.613162 containerd[1440]: time="2025-07-14T22:02:28.613109833Z" level=info msg="StopPodSandbox for \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\"" Jul 14 22:02:28.737343 containerd[1440]: 2025-07-14 22:02:28.683 [INFO][4906] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:28.737343 containerd[1440]: 2025-07-14 22:02:28.683 [INFO][4906] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" iface="eth0" netns="/var/run/netns/cni-1796dc07-d832-1393-5242-3ffc8a2554aa" Jul 14 22:02:28.737343 containerd[1440]: 2025-07-14 22:02:28.683 [INFO][4906] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" iface="eth0" netns="/var/run/netns/cni-1796dc07-d832-1393-5242-3ffc8a2554aa" Jul 14 22:02:28.737343 containerd[1440]: 2025-07-14 22:02:28.683 [INFO][4906] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" iface="eth0" netns="/var/run/netns/cni-1796dc07-d832-1393-5242-3ffc8a2554aa" Jul 14 22:02:28.737343 containerd[1440]: 2025-07-14 22:02:28.683 [INFO][4906] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:28.737343 containerd[1440]: 2025-07-14 22:02:28.683 [INFO][4906] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:28.737343 containerd[1440]: 2025-07-14 22:02:28.720 [INFO][4921] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" HandleID="k8s-pod-network.ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Workload="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:28.737343 containerd[1440]: 2025-07-14 22:02:28.720 [INFO][4921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:28.737343 containerd[1440]: 2025-07-14 22:02:28.720 [INFO][4921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:28.737343 containerd[1440]: 2025-07-14 22:02:28.730 [WARNING][4921] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" HandleID="k8s-pod-network.ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Workload="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:28.737343 containerd[1440]: 2025-07-14 22:02:28.730 [INFO][4921] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" HandleID="k8s-pod-network.ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Workload="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:28.737343 containerd[1440]: 2025-07-14 22:02:28.732 [INFO][4921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:28.737343 containerd[1440]: 2025-07-14 22:02:28.734 [INFO][4906] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:28.741496 containerd[1440]: time="2025-07-14T22:02:28.740318330Z" level=info msg="TearDown network for sandbox \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\" successfully" Jul 14 22:02:28.741496 containerd[1440]: time="2025-07-14T22:02:28.740356731Z" level=info msg="StopPodSandbox for \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\" returns successfully" Jul 14 22:02:28.741496 containerd[1440]: time="2025-07-14T22:02:28.741366146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wftww,Uid:07dad9bf-62f6-44ab-88b7-926fd88e9c73,Namespace:kube-system,Attempt:1,}" Jul 14 22:02:28.741675 kubelet[2535]: E0714 22:02:28.740771 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:02:28.776677 systemd[1]: run-netns-cni\x2d1796dc07\x2dd832\x2d1393\x2d5242\x2d3ffc8a2554aa.mount: Deactivated successfully. Jul 14 22:02:28.818164 kubelet[2535]: E0714 22:02:28.818103 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:02:28.828701 kubelet[2535]: I0714 22:02:28.828642 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-687547dbff-8vw7r" podStartSLOduration=24.581891674 podStartE2EDuration="26.828626367s" podCreationTimestamp="2025-07-14 22:02:02 +0000 UTC" firstStartedPulling="2025-07-14 22:02:26.022314449 +0000 UTC m=+43.500301173" lastFinishedPulling="2025-07-14 22:02:28.269049182 +0000 UTC m=+45.747035866" observedRunningTime="2025-07-14 22:02:28.828335323 +0000 UTC m=+46.306322047" watchObservedRunningTime="2025-07-14 22:02:28.828626367 +0000 UTC m=+46.306613091" Jul 14 22:02:28.856254 kubelet[2535]: I0714 22:02:28.856186 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-687547dbff-nxmhz" podStartSLOduration=24.858437505 podStartE2EDuration="26.856171818s" podCreationTimestamp="2025-07-14 22:02:02 +0000 UTC" firstStartedPulling="2025-07-14 22:02:25.92687552 +0000 UTC m=+43.404862244" lastFinishedPulling="2025-07-14 22:02:27.924609833 +0000 UTC m=+45.402596557" observedRunningTime="2025-07-14 22:02:28.843260385 +0000 UTC m=+46.321247109" watchObservedRunningTime="2025-07-14 22:02:28.856171818 +0000 UTC m=+46.334158542" Jul 14 22:02:28.904174 systemd-networkd[1380]: cali782d4fad475: Link UP Jul 14 22:02:28.904766 systemd-networkd[1380]: cali782d4fad475: Gained carrier Jul 14 22:02:28.919494 kubelet[2535]: I0714 22:02:28.919425 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-ss2ss" podStartSLOduration=40.919407521 podStartE2EDuration="40.919407521s" podCreationTimestamp="2025-07-14 22:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:02:28.857273914 +0000 UTC m=+46.335260638" watchObservedRunningTime="2025-07-14 22:02:28.919407521 +0000 UTC m=+46.397394245" Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.780 [INFO][4940] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.798 [INFO][4940] cni-plugin/plugin.go 
340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--wftww-eth0 coredns-7c65d6cfc9- kube-system 07dad9bf-62f6-44ab-88b7-926fd88e9c73 969 0 2025-07-14 22:01:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-wftww eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali782d4fad475 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wftww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wftww-" Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.798 [INFO][4940] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wftww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.842 [INFO][4959] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" HandleID="k8s-pod-network.efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" Workload="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.842 [INFO][4959] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" HandleID="k8s-pod-network.efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" Workload="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-wftww", "timestamp":"2025-07-14 22:02:28.842144969 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.842 [INFO][4959] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.842 [INFO][4959] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.842 [INFO][4959] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.857 [INFO][4959] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" host="localhost" Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.868 [INFO][4959] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.875 [INFO][4959] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.877 [INFO][4959] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.880 [INFO][4959] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.880 [INFO][4959] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" host="localhost" Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.881 [INFO][4959] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.887 [INFO][4959] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" host="localhost" Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.896 [INFO][4959] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" host="localhost" Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.896 [INFO][4959] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" host="localhost" Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.897 [INFO][4959] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
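The pod_startup_latency_tracker records a short way above encode a simple relationship, reconstructed here from the numbers themselves: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration further subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling, taken on the monotonic m=+ clock). Reproducing the calico-apiserver-687547dbff-8vw7r figures:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Monotonic offsets (the m=+... values) from the kubelet record above.
    e2e := 26828626367 * time.Nanosecond       // podStartE2EDuration: 26.828626367s
    firstPull := 43500301173 * time.Nanosecond // firstStartedPulling m=+43.500301173
    lastPull := 45747035866 * time.Nanosecond  // lastFinishedPulling m=+45.747035866

    slo := e2e - (lastPull - firstPull)
    fmt.Println(slo) // 24.581891674s, matching podStartSLOduration in the log
}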
Jul 14 22:02:28.921270 containerd[1440]: 2025-07-14 22:02:28.897 [INFO][4959] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" HandleID="k8s-pod-network.efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" Workload="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:28.921827 containerd[1440]: 2025-07-14 22:02:28.902 [INFO][4940] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wftww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wftww-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"07dad9bf-62f6-44ab-88b7-926fd88e9c73", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-wftww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali782d4fad475", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:28.921827 containerd[1440]: 2025-07-14 22:02:28.902 [INFO][4940] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wftww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:28.921827 containerd[1440]: 2025-07-14 22:02:28.902 [INFO][4940] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali782d4fad475 ContainerID="efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wftww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:28.921827 containerd[1440]: 2025-07-14 22:02:28.904 [INFO][4940] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wftww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:28.921827 
containerd[1440]: 2025-07-14 22:02:28.906 [INFO][4940] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wftww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wftww-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"07dad9bf-62f6-44ab-88b7-926fd88e9c73", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea", Pod:"coredns-7c65d6cfc9-wftww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali782d4fad475", MAC:"7a:2b:29:17:db:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:28.921827 containerd[1440]: 2025-07-14 22:02:28.918 [INFO][4940] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wftww" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:28.951000 containerd[1440]: time="2025-07-14T22:02:28.950909751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:02:28.951000 containerd[1440]: time="2025-07-14T22:02:28.950973712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:02:28.951000 containerd[1440]: time="2025-07-14T22:02:28.950988912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:28.953527 containerd[1440]: time="2025-07-14T22:02:28.951074553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:28.983624 systemd[1]: Started cri-containerd-efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea.scope - libcontainer container efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea. Jul 14 22:02:28.995074 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:02:29.014312 containerd[1440]: time="2025-07-14T22:02:29.014268127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wftww,Uid:07dad9bf-62f6-44ab-88b7-926fd88e9c73,Namespace:kube-system,Attempt:1,} returns sandbox id \"efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea\"" Jul 14 22:02:29.014968 kubelet[2535]: E0714 22:02:29.014944 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:02:29.021131 containerd[1440]: time="2025-07-14T22:02:29.020860861Z" level=info msg="CreateContainer within sandbox \"efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:02:29.032211 containerd[1440]: time="2025-07-14T22:02:29.032150062Z" level=info msg="CreateContainer within sandbox \"efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3413040c2f0f3a3d18c018cf33b0180460dc7ebf8738fd1c94fc0f89ee8b60ed\"" Jul 14 22:02:29.032576 containerd[1440]: time="2025-07-14T22:02:29.032553708Z" level=info msg="StartContainer for \"3413040c2f0f3a3d18c018cf33b0180460dc7ebf8738fd1c94fc0f89ee8b60ed\"" Jul 14 22:02:29.057606 systemd[1]: Started cri-containerd-3413040c2f0f3a3d18c018cf33b0180460dc7ebf8738fd1c94fc0f89ee8b60ed.scope - libcontainer container 3413040c2f0f3a3d18c018cf33b0180460dc7ebf8738fd1c94fc0f89ee8b60ed. 
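[Editor's illustration] The recurring kubelet dns.go "Nameserver limits exceeded" error above reflects the glibc resolver honoring at most three nameservers: kubelet keeps the first three entries from the node's resolv.conf and drops the rest, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A hedged re-implementation of that truncation for illustration only — not kubelet's own code:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

// applyNameserverLimit keeps the first three nameserver entries, as the
// resolver would, and reports the ones that get omitted.
func applyNameserverLimit(resolvConf string) (kept, dropped []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return kept, dropped
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	kept, dropped := applyNameserverLimit(conf)
	if len(dropped) > 0 {
		fmt.Printf("Nameserver limits exceeded; applied nameserver line is: %s\n",
			strings.Join(kept, " "))
	}
}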
Jul 14 22:02:29.099837 containerd[1440]: time="2025-07-14T22:02:29.099681545Z" level=info msg="StartContainer for \"3413040c2f0f3a3d18c018cf33b0180460dc7ebf8738fd1c94fc0f89ee8b60ed\" returns successfully" Jul 14 22:02:29.213085 containerd[1440]: time="2025-07-14T22:02:29.212967000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:29.214194 containerd[1440]: time="2025-07-14T22:02:29.214157297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 14 22:02:29.216061 containerd[1440]: time="2025-07-14T22:02:29.214886228Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:29.217430 containerd[1440]: time="2025-07-14T22:02:29.217397703Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:29.218248 containerd[1440]: time="2025-07-14T22:02:29.218209515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 948.221679ms" Jul 14 22:02:29.218348 containerd[1440]: time="2025-07-14T22:02:29.218331437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 14 22:02:29.220418 containerd[1440]: time="2025-07-14T22:02:29.220314625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 14 22:02:29.221811 containerd[1440]: time="2025-07-14T22:02:29.221196238Z" level=info msg="CreateContainer within sandbox \"d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 14 22:02:29.243325 containerd[1440]: time="2025-07-14T22:02:29.243288032Z" level=info msg="CreateContainer within sandbox \"d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5174c61ac27d5a5d04f9a27aa840df60674b82951e6954d6c194d2bda2a4e323\"" Jul 14 22:02:29.245562 containerd[1440]: time="2025-07-14T22:02:29.244077004Z" level=info msg="StartContainer for \"5174c61ac27d5a5d04f9a27aa840df60674b82951e6954d6c194d2bda2a4e323\"" Jul 14 22:02:29.276599 systemd[1]: Started cri-containerd-5174c61ac27d5a5d04f9a27aa840df60674b82951e6954d6c194d2bda2a4e323.scope - libcontainer container 5174c61ac27d5a5d04f9a27aa840df60674b82951e6954d6c194d2bda2a4e323. 
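[Editor's illustration] The pull report above gives enough to gauge transfer speed: 8,225,702 bytes read for ghcr.io/flatcar/calico/csi:v3.30.2 in 948.221679ms (the quoted "size 9594943" is the unpacked image). Purely illustrative arithmetic over those logged values:

package main

import (
	"fmt"
	"time"
)

func main() {
	bytesRead := 8225702.0
	elapsed := 948221679 * time.Nanosecond // 948.221679ms from the log
	rate := bytesRead / elapsed.Seconds()  // bytes per second
	fmt.Printf("~%.1f MiB/s effective pull rate\n", rate/(1<<20))
}

That works out to roughly 8.3 MiB/s for this layer-sized image.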
Jul 14 22:02:29.315776 containerd[1440]: time="2025-07-14T22:02:29.315202858Z" level=info msg="StartContainer for \"5174c61ac27d5a5d04f9a27aa840df60674b82951e6954d6c194d2bda2a4e323\" returns successfully" Jul 14 22:02:29.531708 systemd-networkd[1380]: cali82628ede15e: Gained IPv6LL Jul 14 22:02:29.609985 containerd[1440]: time="2025-07-14T22:02:29.609910299Z" level=info msg="StopPodSandbox for \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\"" Jul 14 22:02:29.663751 systemd-networkd[1380]: caliae04b4262bc: Gained IPv6LL Jul 14 22:02:29.664480 systemd-networkd[1380]: calice22738fd7f: Gained IPv6LL Jul 14 22:02:29.706561 containerd[1440]: 2025-07-14 22:02:29.666 [INFO][5108] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:29.706561 containerd[1440]: 2025-07-14 22:02:29.666 [INFO][5108] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" iface="eth0" netns="/var/run/netns/cni-266674a1-7d23-6f3b-9896-e82587508983" Jul 14 22:02:29.706561 containerd[1440]: 2025-07-14 22:02:29.666 [INFO][5108] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" iface="eth0" netns="/var/run/netns/cni-266674a1-7d23-6f3b-9896-e82587508983" Jul 14 22:02:29.706561 containerd[1440]: 2025-07-14 22:02:29.667 [INFO][5108] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" iface="eth0" netns="/var/run/netns/cni-266674a1-7d23-6f3b-9896-e82587508983" Jul 14 22:02:29.706561 containerd[1440]: 2025-07-14 22:02:29.667 [INFO][5108] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:29.706561 containerd[1440]: 2025-07-14 22:02:29.667 [INFO][5108] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:29.706561 containerd[1440]: 2025-07-14 22:02:29.690 [INFO][5116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" HandleID="k8s-pod-network.c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Workload="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:29.706561 containerd[1440]: 2025-07-14 22:02:29.690 [INFO][5116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:29.706561 containerd[1440]: 2025-07-14 22:02:29.690 [INFO][5116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:29.706561 containerd[1440]: 2025-07-14 22:02:29.699 [WARNING][5116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" HandleID="k8s-pod-network.c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Workload="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:29.706561 containerd[1440]: 2025-07-14 22:02:29.699 [INFO][5116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" HandleID="k8s-pod-network.c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Workload="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:29.706561 containerd[1440]: 2025-07-14 22:02:29.702 [INFO][5116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:29.706561 containerd[1440]: 2025-07-14 22:02:29.704 [INFO][5108] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:29.707176 containerd[1440]: time="2025-07-14T22:02:29.707118565Z" level=info msg="TearDown network for sandbox \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\" successfully" Jul 14 22:02:29.707176 containerd[1440]: time="2025-07-14T22:02:29.707153446Z" level=info msg="StopPodSandbox for \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\" returns successfully" Jul 14 22:02:29.707928 containerd[1440]: time="2025-07-14T22:02:29.707897816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-568d8c6dc9-hlgkn,Uid:69b2a205-08a9-48bb-b9c3-874e85d81984,Namespace:calico-system,Attempt:1,}" Jul 14 22:02:29.771911 systemd[1]: run-netns-cni\x2d266674a1\x2d7d23\x2d6f3b\x2d9896\x2de82587508983.mount: Deactivated successfully. 
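[Editor's illustration] The teardown above shows the release path staying idempotent: the CNI DEL first tries to release the address by its allocation handle, gets the WARNING "Asked to release address but it doesn't exist. Ignoring", and falls back to releasing by workload ID, so a repeated DEL for the same sandbox cannot fail or double-free. A minimal Go sketch of that fallback under hypothetical types — not Calico's real API:

package main

import "fmt"

type ipamStore struct {
	byHandle   map[string]string // handleID -> IP
	byWorkload map[string]string // workloadID -> IP
}

func (s *ipamStore) release(handleID, workloadID string) {
	// "Releasing address using handleID ..."
	if ip, ok := s.byHandle[handleID]; ok {
		delete(s.byHandle, handleID)
		fmt.Printf("released %s using handleID\n", ip)
		return
	}
	// "Asked to release address but it doesn't exist. Ignoring ..."
	// "Releasing address using workloadID ..."
	if ip, ok := s.byWorkload[workloadID]; ok {
		delete(s.byWorkload, workloadID)
		fmt.Printf("released %s using workloadID\n", ip)
		return
	}
	fmt.Println("nothing to release; teardown is a no-op")
}

func main() {
	s := &ipamStore{
		byHandle:   map[string]string{}, // handle entry already gone, as in the log
		byWorkload: map[string]string{"wep-calico-kube-controllers": "192.168.88.136"},
	}
	s.release("k8s-pod-network.c1eecb87...", "wep-calico-kube-controllers")
}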
Jul 14 22:02:29.831971 kubelet[2535]: I0714 22:02:29.831862 2535 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:02:29.833609 kubelet[2535]: E0714 22:02:29.833467 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:02:29.834398 kubelet[2535]: E0714 22:02:29.834056 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:02:29.842821 systemd-networkd[1380]: calid7c5701acc8: Link UP Jul 14 22:02:29.843316 systemd-networkd[1380]: calid7c5701acc8: Gained carrier Jul 14 22:02:29.854724 kubelet[2535]: I0714 22:02:29.854662 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wftww" podStartSLOduration=41.854643148 podStartE2EDuration="41.854643148s" podCreationTimestamp="2025-07-14 22:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:02:29.853210928 +0000 UTC m=+47.331197652" watchObservedRunningTime="2025-07-14 22:02:29.854643148 +0000 UTC m=+47.332629872" Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.746 [INFO][5124] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.759 [INFO][5124] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0 calico-kube-controllers-568d8c6dc9- calico-system 69b2a205-08a9-48bb-b9c3-874e85d81984 999 0 2025-07-14 22:02:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:568d8c6dc9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-568d8c6dc9-hlgkn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid7c5701acc8 [] [] }} ContainerID="c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" Namespace="calico-system" Pod="calico-kube-controllers-568d8c6dc9-hlgkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-" Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.760 [INFO][5124] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" Namespace="calico-system" Pod="calico-kube-controllers-568d8c6dc9-hlgkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.794 [INFO][5139] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" HandleID="k8s-pod-network.c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" Workload="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.794 [INFO][5139] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" 
HandleID="k8s-pod-network.c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" Workload="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a05d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-568d8c6dc9-hlgkn", "timestamp":"2025-07-14 22:02:29.794791615 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.795 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.795 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.795 [INFO][5139] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.809 [INFO][5139] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" host="localhost" Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.813 [INFO][5139] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.819 [INFO][5139] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.820 [INFO][5139] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.822 [INFO][5139] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.822 [INFO][5139] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" host="localhost" Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.824 [INFO][5139] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443 Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.828 [INFO][5139] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" host="localhost" Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.836 [INFO][5139] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" host="localhost" Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.836 [INFO][5139] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" host="localhost" Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.836 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:02:29.861606 containerd[1440]: 2025-07-14 22:02:29.836 [INFO][5139] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" HandleID="k8s-pod-network.c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" Workload="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:29.862132 containerd[1440]: 2025-07-14 22:02:29.841 [INFO][5124] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" Namespace="calico-system" Pod="calico-kube-controllers-568d8c6dc9-hlgkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0", GenerateName:"calico-kube-controllers-568d8c6dc9-", Namespace:"calico-system", SelfLink:"", UID:"69b2a205-08a9-48bb-b9c3-874e85d81984", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"568d8c6dc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-568d8c6dc9-hlgkn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid7c5701acc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:29.862132 containerd[1440]: 2025-07-14 22:02:29.841 [INFO][5124] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" Namespace="calico-system" Pod="calico-kube-controllers-568d8c6dc9-hlgkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:29.862132 containerd[1440]: 2025-07-14 22:02:29.841 [INFO][5124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7c5701acc8 ContainerID="c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" Namespace="calico-system" Pod="calico-kube-controllers-568d8c6dc9-hlgkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:29.862132 containerd[1440]: 2025-07-14 22:02:29.843 [INFO][5124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" Namespace="calico-system" Pod="calico-kube-controllers-568d8c6dc9-hlgkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:29.862132 containerd[1440]: 2025-07-14 22:02:29.844 [INFO][5124] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" Namespace="calico-system" Pod="calico-kube-controllers-568d8c6dc9-hlgkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0", GenerateName:"calico-kube-controllers-568d8c6dc9-", Namespace:"calico-system", SelfLink:"", UID:"69b2a205-08a9-48bb-b9c3-874e85d81984", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"568d8c6dc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443", Pod:"calico-kube-controllers-568d8c6dc9-hlgkn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid7c5701acc8", MAC:"d2:dd:05:11:35:a1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:29.862132 containerd[1440]: 2025-07-14 22:02:29.853 [INFO][5124] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443" Namespace="calico-system" Pod="calico-kube-controllers-568d8c6dc9-hlgkn" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:29.886335 containerd[1440]: time="2025-07-14T22:02:29.885801353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:02:29.886335 containerd[1440]: time="2025-07-14T22:02:29.885863474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:02:29.886335 containerd[1440]: time="2025-07-14T22:02:29.885874514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:29.886335 containerd[1440]: time="2025-07-14T22:02:29.885961715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:02:29.921772 systemd[1]: Started cri-containerd-c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443.scope - libcontainer container c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443. 
Jul 14 22:02:29.933512 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:02:29.953053 containerd[1440]: time="2025-07-14T22:02:29.953010671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-568d8c6dc9-hlgkn,Uid:69b2a205-08a9-48bb-b9c3-874e85d81984,Namespace:calico-system,Attempt:1,} returns sandbox id \"c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443\"" Jul 14 22:02:30.811610 systemd-networkd[1380]: cali782d4fad475: Gained IPv6LL Jul 14 22:02:30.834927 kubelet[2535]: E0714 22:02:30.834894 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:02:30.835956 kubelet[2535]: E0714 22:02:30.835522 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:02:31.068449 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4156087426.mount: Deactivated successfully. Jul 14 22:02:31.443515 containerd[1440]: time="2025-07-14T22:02:31.442848642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:31.443515 containerd[1440]: time="2025-07-14T22:02:31.443417410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 14 22:02:31.444413 containerd[1440]: time="2025-07-14T22:02:31.444357382Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:31.446585 containerd[1440]: time="2025-07-14T22:02:31.446531210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:31.447475 containerd[1440]: time="2025-07-14T22:02:31.447411062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.226988516s" Jul 14 22:02:31.447475 containerd[1440]: time="2025-07-14T22:02:31.447445382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 14 22:02:31.448483 containerd[1440]: time="2025-07-14T22:02:31.448404475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 14 22:02:31.449710 containerd[1440]: time="2025-07-14T22:02:31.449634571Z" level=info msg="CreateContainer within sandbox \"576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 14 22:02:31.462812 containerd[1440]: time="2025-07-14T22:02:31.462723101Z" level=info msg="CreateContainer within sandbox \"576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id 
\"894bc7155c1f129f65208fd09d6e4274a1d93577d801bb20e939c07d78e8b4d5\"" Jul 14 22:02:31.463559 containerd[1440]: time="2025-07-14T22:02:31.463445670Z" level=info msg="StartContainer for \"894bc7155c1f129f65208fd09d6e4274a1d93577d801bb20e939c07d78e8b4d5\"" Jul 14 22:02:31.491626 systemd[1]: Started cri-containerd-894bc7155c1f129f65208fd09d6e4274a1d93577d801bb20e939c07d78e8b4d5.scope - libcontainer container 894bc7155c1f129f65208fd09d6e4274a1d93577d801bb20e939c07d78e8b4d5. Jul 14 22:02:31.531673 containerd[1440]: time="2025-07-14T22:02:31.531633397Z" level=info msg="StartContainer for \"894bc7155c1f129f65208fd09d6e4274a1d93577d801bb20e939c07d78e8b4d5\" returns successfully" Jul 14 22:02:31.771636 systemd-networkd[1380]: calid7c5701acc8: Gained IPv6LL Jul 14 22:02:31.810624 kubelet[2535]: I0714 22:02:31.810570 2535 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:02:31.856232 kubelet[2535]: E0714 22:02:31.856188 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:02:32.110913 kubelet[2535]: I0714 22:02:32.110792 2535 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:02:32.111211 kubelet[2535]: E0714 22:02:32.111186 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:02:32.134103 kubelet[2535]: I0714 22:02:32.134035 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-66djn" podStartSLOduration=22.055202678 podStartE2EDuration="25.134015712s" podCreationTimestamp="2025-07-14 22:02:07 +0000 UTC" firstStartedPulling="2025-07-14 22:02:28.369436399 +0000 UTC m=+45.847423123" lastFinishedPulling="2025-07-14 22:02:31.448249433 +0000 UTC m=+48.926236157" observedRunningTime="2025-07-14 22:02:31.869554832 +0000 UTC m=+49.347541596" watchObservedRunningTime="2025-07-14 22:02:32.134015712 +0000 UTC m=+49.612002436" Jul 14 22:02:32.673474 kernel: bpftool[5393]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 14 22:02:32.700726 containerd[1440]: time="2025-07-14T22:02:32.700656342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:32.701627 containerd[1440]: time="2025-07-14T22:02:32.701567394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 14 22:02:32.702726 containerd[1440]: time="2025-07-14T22:02:32.702692807Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:32.705893 containerd[1440]: time="2025-07-14T22:02:32.705811206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:32.706783 containerd[1440]: time="2025-07-14T22:02:32.706745338Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", 
repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.258310703s" Jul 14 22:02:32.706858 containerd[1440]: time="2025-07-14T22:02:32.706787978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 14 22:02:32.708619 containerd[1440]: time="2025-07-14T22:02:32.708153875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 14 22:02:32.710099 containerd[1440]: time="2025-07-14T22:02:32.709997858Z" level=info msg="CreateContainer within sandbox \"d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 14 22:02:32.724828 containerd[1440]: time="2025-07-14T22:02:32.724783322Z" level=info msg="CreateContainer within sandbox \"d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c219bcf19e43632c1304f0294eabf9d9a2f4e825e4ba5ddf4d4720c47d66c9f4\"" Jul 14 22:02:32.727996 containerd[1440]: time="2025-07-14T22:02:32.726633145Z" level=info msg="StartContainer for \"c219bcf19e43632c1304f0294eabf9d9a2f4e825e4ba5ddf4d4720c47d66c9f4\"" Jul 14 22:02:32.763633 systemd[1]: Started cri-containerd-c219bcf19e43632c1304f0294eabf9d9a2f4e825e4ba5ddf4d4720c47d66c9f4.scope - libcontainer container c219bcf19e43632c1304f0294eabf9d9a2f4e825e4ba5ddf4d4720c47d66c9f4. Jul 14 22:02:32.790344 containerd[1440]: time="2025-07-14T22:02:32.790287854Z" level=info msg="StartContainer for \"c219bcf19e43632c1304f0294eabf9d9a2f4e825e4ba5ddf4d4720c47d66c9f4\" returns successfully" Jul 14 22:02:32.864492 kubelet[2535]: E0714 22:02:32.862157 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:02:32.943939 systemd-networkd[1380]: vxlan.calico: Link UP Jul 14 22:02:32.943945 systemd-networkd[1380]: vxlan.calico: Gained carrier Jul 14 22:02:33.689057 kubelet[2535]: I0714 22:02:33.689014 2535 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 14 22:02:33.690664 kubelet[2535]: I0714 22:02:33.690401 2535 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 14 22:02:34.459621 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL Jul 14 22:02:34.715521 containerd[1440]: time="2025-07-14T22:02:34.715142786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:34.716270 containerd[1440]: time="2025-07-14T22:02:34.715755753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 14 22:02:34.717002 containerd[1440]: time="2025-07-14T22:02:34.716929086Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:34.719009 containerd[1440]: time="2025-07-14T22:02:34.718936949Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:02:34.719768 containerd[1440]: time="2025-07-14T22:02:34.719671117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.011477001s" Jul 14 22:02:34.719768 containerd[1440]: time="2025-07-14T22:02:34.719705598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 14 22:02:34.728027 containerd[1440]: time="2025-07-14T22:02:34.727987131Z" level=info msg="CreateContainer within sandbox \"c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 14 22:02:34.738864 containerd[1440]: time="2025-07-14T22:02:34.738797453Z" level=info msg="CreateContainer within sandbox \"c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6d7990e3f14c3b8e7ad8d7f05280af930f1e9e2ce0644a575ce0d1687483569b\"" Jul 14 22:02:34.740927 containerd[1440]: time="2025-07-14T22:02:34.740043507Z" level=info msg="StartContainer for \"6d7990e3f14c3b8e7ad8d7f05280af930f1e9e2ce0644a575ce0d1687483569b\"" Jul 14 22:02:34.780680 systemd[1]: Started cri-containerd-6d7990e3f14c3b8e7ad8d7f05280af930f1e9e2ce0644a575ce0d1687483569b.scope - libcontainer container 6d7990e3f14c3b8e7ad8d7f05280af930f1e9e2ce0644a575ce0d1687483569b. 
Jul 14 22:02:34.810918 containerd[1440]: time="2025-07-14T22:02:34.810810504Z" level=info msg="StartContainer for \"6d7990e3f14c3b8e7ad8d7f05280af930f1e9e2ce0644a575ce0d1687483569b\" returns successfully" Jul 14 22:02:34.915413 kubelet[2535]: I0714 22:02:34.915321 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-568d8c6dc9-hlgkn" podStartSLOduration=23.152499612 podStartE2EDuration="27.915304122s" podCreationTimestamp="2025-07-14 22:02:07 +0000 UTC" firstStartedPulling="2025-07-14 22:02:29.957543655 +0000 UTC m=+47.435530339" lastFinishedPulling="2025-07-14 22:02:34.720348125 +0000 UTC m=+52.198334849" observedRunningTime="2025-07-14 22:02:34.903376427 +0000 UTC m=+52.381363151" watchObservedRunningTime="2025-07-14 22:02:34.915304122 +0000 UTC m=+52.393290846" Jul 14 22:02:34.916098 kubelet[2535]: I0714 22:02:34.916024 2535 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qbwq6" podStartSLOduration=23.431096273 podStartE2EDuration="27.91601253s" podCreationTimestamp="2025-07-14 22:02:07 +0000 UTC" firstStartedPulling="2025-07-14 22:02:28.223128257 +0000 UTC m=+45.701114941" lastFinishedPulling="2025-07-14 22:02:32.708044474 +0000 UTC m=+50.186031198" observedRunningTime="2025-07-14 22:02:32.874215656 +0000 UTC m=+50.352202380" watchObservedRunningTime="2025-07-14 22:02:34.91601253 +0000 UTC m=+52.393999294" Jul 14 22:02:42.607166 containerd[1440]: time="2025-07-14T22:02:42.607127200Z" level=info msg="StopPodSandbox for \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\"" Jul 14 22:02:42.687507 containerd[1440]: 2025-07-14 22:02:42.651 [WARNING][5706] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0", GenerateName:"calico-kube-controllers-568d8c6dc9-", Namespace:"calico-system", SelfLink:"", UID:"69b2a205-08a9-48bb-b9c3-874e85d81984", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"568d8c6dc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443", Pod:"calico-kube-controllers-568d8c6dc9-hlgkn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid7c5701acc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:42.687507 containerd[1440]: 2025-07-14 22:02:42.652 [INFO][5706] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:42.687507 containerd[1440]: 2025-07-14 22:02:42.652 [INFO][5706] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" iface="eth0" netns="" Jul 14 22:02:42.687507 containerd[1440]: 2025-07-14 22:02:42.652 [INFO][5706] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:42.687507 containerd[1440]: 2025-07-14 22:02:42.652 [INFO][5706] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:42.687507 containerd[1440]: 2025-07-14 22:02:42.673 [INFO][5715] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" HandleID="k8s-pod-network.c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Workload="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:42.687507 containerd[1440]: 2025-07-14 22:02:42.673 [INFO][5715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:42.687507 containerd[1440]: 2025-07-14 22:02:42.673 [INFO][5715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:42.687507 containerd[1440]: 2025-07-14 22:02:42.681 [WARNING][5715] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" HandleID="k8s-pod-network.c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Workload="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:42.687507 containerd[1440]: 2025-07-14 22:02:42.681 [INFO][5715] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" HandleID="k8s-pod-network.c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Workload="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:42.687507 containerd[1440]: 2025-07-14 22:02:42.683 [INFO][5715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:42.687507 containerd[1440]: 2025-07-14 22:02:42.685 [INFO][5706] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:42.687997 containerd[1440]: time="2025-07-14T22:02:42.687534953Z" level=info msg="TearDown network for sandbox \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\" successfully" Jul 14 22:02:42.687997 containerd[1440]: time="2025-07-14T22:02:42.687559913Z" level=info msg="StopPodSandbox for \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\" returns successfully" Jul 14 22:02:42.688140 containerd[1440]: time="2025-07-14T22:02:42.688094437Z" level=info msg="RemovePodSandbox for \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\"" Jul 14 22:02:42.693311 containerd[1440]: time="2025-07-14T22:02:42.693268915Z" level=info msg="Forcibly stopping sandbox \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\"" Jul 14 22:02:42.759661 containerd[1440]: 2025-07-14 22:02:42.728 [WARNING][5733] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0", GenerateName:"calico-kube-controllers-568d8c6dc9-", Namespace:"calico-system", SelfLink:"", UID:"69b2a205-08a9-48bb-b9c3-874e85d81984", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"568d8c6dc9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c03734d351b5866ba4ae6571a9de4e4798c5a86179f0ccf9754584b302162443", Pod:"calico-kube-controllers-568d8c6dc9-hlgkn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid7c5701acc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:42.759661 containerd[1440]: 2025-07-14 22:02:42.728 [INFO][5733] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:42.759661 containerd[1440]: 2025-07-14 22:02:42.728 [INFO][5733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" iface="eth0" netns="" Jul 14 22:02:42.759661 containerd[1440]: 2025-07-14 22:02:42.728 [INFO][5733] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:42.759661 containerd[1440]: 2025-07-14 22:02:42.728 [INFO][5733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:42.759661 containerd[1440]: 2025-07-14 22:02:42.745 [INFO][5742] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" HandleID="k8s-pod-network.c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Workload="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:42.759661 containerd[1440]: 2025-07-14 22:02:42.746 [INFO][5742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:42.759661 containerd[1440]: 2025-07-14 22:02:42.746 [INFO][5742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:42.759661 containerd[1440]: 2025-07-14 22:02:42.754 [WARNING][5742] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" HandleID="k8s-pod-network.c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Workload="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:42.759661 containerd[1440]: 2025-07-14 22:02:42.754 [INFO][5742] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" HandleID="k8s-pod-network.c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Workload="localhost-k8s-calico--kube--controllers--568d8c6dc9--hlgkn-eth0" Jul 14 22:02:42.759661 containerd[1440]: 2025-07-14 22:02:42.756 [INFO][5742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:42.759661 containerd[1440]: 2025-07-14 22:02:42.757 [INFO][5733] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6" Jul 14 22:02:42.760068 containerd[1440]: time="2025-07-14T22:02:42.759706405Z" level=info msg="TearDown network for sandbox \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\" successfully" Jul 14 22:02:42.773538 containerd[1440]: time="2025-07-14T22:02:42.773482786Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:02:42.773626 containerd[1440]: time="2025-07-14T22:02:42.773577747Z" level=info msg="RemovePodSandbox \"c1eecb87c8a2f23bb4bc4d862250027dac78f8787d31b88c0e5573be04b6f1a6\" returns successfully" Jul 14 22:02:42.774159 containerd[1440]: time="2025-07-14T22:02:42.774134431Z" level=info msg="StopPodSandbox for \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\"" Jul 14 22:02:42.846072 containerd[1440]: 2025-07-14 22:02:42.808 [WARNING][5759] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qbwq6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88e059ee-2b3a-4b57-8789-ebeef41ce071", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c", Pod:"csi-node-driver-qbwq6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali82628ede15e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:42.846072 containerd[1440]: 2025-07-14 22:02:42.808 [INFO][5759] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:42.846072 containerd[1440]: 2025-07-14 22:02:42.808 [INFO][5759] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" iface="eth0" netns="" Jul 14 22:02:42.846072 containerd[1440]: 2025-07-14 22:02:42.808 [INFO][5759] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:42.846072 containerd[1440]: 2025-07-14 22:02:42.808 [INFO][5759] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:42.846072 containerd[1440]: 2025-07-14 22:02:42.831 [INFO][5768] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" HandleID="k8s-pod-network.43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Workload="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:42.846072 containerd[1440]: 2025-07-14 22:02:42.831 [INFO][5768] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:42.846072 containerd[1440]: 2025-07-14 22:02:42.831 [INFO][5768] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:42.846072 containerd[1440]: 2025-07-14 22:02:42.841 [WARNING][5768] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" HandleID="k8s-pod-network.43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Workload="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:42.846072 containerd[1440]: 2025-07-14 22:02:42.841 [INFO][5768] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" HandleID="k8s-pod-network.43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Workload="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:42.846072 containerd[1440]: 2025-07-14 22:02:42.842 [INFO][5768] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:42.846072 containerd[1440]: 2025-07-14 22:02:42.844 [INFO][5759] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:42.846687 containerd[1440]: time="2025-07-14T22:02:42.846104642Z" level=info msg="TearDown network for sandbox \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\" successfully" Jul 14 22:02:42.846687 containerd[1440]: time="2025-07-14T22:02:42.846130842Z" level=info msg="StopPodSandbox for \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\" returns successfully" Jul 14 22:02:42.846687 containerd[1440]: time="2025-07-14T22:02:42.846558925Z" level=info msg="RemovePodSandbox for \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\"" Jul 14 22:02:42.846687 containerd[1440]: time="2025-07-14T22:02:42.846591245Z" level=info msg="Forcibly stopping sandbox \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\"" Jul 14 22:02:42.913676 containerd[1440]: 2025-07-14 22:02:42.879 [WARNING][5788] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qbwq6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88e059ee-2b3a-4b57-8789-ebeef41ce071", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d8809b7d6fcdd89387e6fc0992787a38fc13089464918a42f95ec1415e75e81c", Pod:"csi-node-driver-qbwq6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali82628ede15e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:42.913676 containerd[1440]: 2025-07-14 22:02:42.879 [INFO][5788] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:42.913676 containerd[1440]: 2025-07-14 22:02:42.879 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" iface="eth0" netns="" Jul 14 22:02:42.913676 containerd[1440]: 2025-07-14 22:02:42.879 [INFO][5788] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:42.913676 containerd[1440]: 2025-07-14 22:02:42.879 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:42.913676 containerd[1440]: 2025-07-14 22:02:42.898 [INFO][5797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" HandleID="k8s-pod-network.43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Workload="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:42.913676 containerd[1440]: 2025-07-14 22:02:42.898 [INFO][5797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:42.913676 containerd[1440]: 2025-07-14 22:02:42.898 [INFO][5797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:42.913676 containerd[1440]: 2025-07-14 22:02:42.906 [WARNING][5797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" HandleID="k8s-pod-network.43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Workload="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:42.913676 containerd[1440]: 2025-07-14 22:02:42.906 [INFO][5797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" HandleID="k8s-pod-network.43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Workload="localhost-k8s-csi--node--driver--qbwq6-eth0" Jul 14 22:02:42.913676 containerd[1440]: 2025-07-14 22:02:42.908 [INFO][5797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:42.913676 containerd[1440]: 2025-07-14 22:02:42.911 [INFO][5788] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e" Jul 14 22:02:42.913676 containerd[1440]: time="2025-07-14T22:02:42.913637340Z" level=info msg="TearDown network for sandbox \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\" successfully" Jul 14 22:02:42.917240 containerd[1440]: time="2025-07-14T22:02:42.917194246Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:02:42.917319 containerd[1440]: time="2025-07-14T22:02:42.917254246Z" level=info msg="RemovePodSandbox \"43d9daef8e31e6e2757e4202d804222bddf3399deb51f8b6bf5adab2c954de1e\" returns successfully" Jul 14 22:02:42.917657 containerd[1440]: time="2025-07-14T22:02:42.917622409Z" level=info msg="StopPodSandbox for \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\"" Jul 14 22:02:42.984535 containerd[1440]: 2025-07-14 22:02:42.950 [WARNING][5814] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wftww-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"07dad9bf-62f6-44ab-88b7-926fd88e9c73", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea", Pod:"coredns-7c65d6cfc9-wftww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali782d4fad475", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:42.984535 containerd[1440]: 2025-07-14 22:02:42.951 [INFO][5814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:42.984535 containerd[1440]: 2025-07-14 22:02:42.951 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" iface="eth0" netns="" Jul 14 22:02:42.984535 containerd[1440]: 2025-07-14 22:02:42.951 [INFO][5814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:42.984535 containerd[1440]: 2025-07-14 22:02:42.951 [INFO][5814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:42.984535 containerd[1440]: 2025-07-14 22:02:42.970 [INFO][5823] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" HandleID="k8s-pod-network.ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Workload="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:42.984535 containerd[1440]: 2025-07-14 22:02:42.970 [INFO][5823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:42.984535 containerd[1440]: 2025-07-14 22:02:42.970 [INFO][5823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:02:42.984535 containerd[1440]: 2025-07-14 22:02:42.979 [WARNING][5823] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" HandleID="k8s-pod-network.ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Workload="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:42.984535 containerd[1440]: 2025-07-14 22:02:42.979 [INFO][5823] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" HandleID="k8s-pod-network.ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Workload="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:42.984535 containerd[1440]: 2025-07-14 22:02:42.981 [INFO][5823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:42.984535 containerd[1440]: 2025-07-14 22:02:42.982 [INFO][5814] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:42.984956 containerd[1440]: time="2025-07-14T22:02:42.984577183Z" level=info msg="TearDown network for sandbox \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\" successfully" Jul 14 22:02:42.984956 containerd[1440]: time="2025-07-14T22:02:42.984602463Z" level=info msg="StopPodSandbox for \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\" returns successfully" Jul 14 22:02:42.985116 containerd[1440]: time="2025-07-14T22:02:42.985078627Z" level=info msg="RemovePodSandbox for \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\"" Jul 14 22:02:42.985116 containerd[1440]: time="2025-07-14T22:02:42.985113147Z" level=info msg="Forcibly stopping sandbox \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\"" Jul 14 22:02:43.049323 containerd[1440]: 2025-07-14 22:02:43.017 [WARNING][5841] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wftww-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"07dad9bf-62f6-44ab-88b7-926fd88e9c73", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"efb8f838e3dedc0b0a4929b9b44a6ee3c3941a7d22ac3e06f9692d23f14d86ea", Pod:"coredns-7c65d6cfc9-wftww", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali782d4fad475", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:43.049323 containerd[1440]: 2025-07-14 22:02:43.017 [INFO][5841] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:43.049323 containerd[1440]: 2025-07-14 22:02:43.017 [INFO][5841] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" iface="eth0" netns="" Jul 14 22:02:43.049323 containerd[1440]: 2025-07-14 22:02:43.017 [INFO][5841] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:43.049323 containerd[1440]: 2025-07-14 22:02:43.017 [INFO][5841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:43.049323 containerd[1440]: 2025-07-14 22:02:43.034 [INFO][5850] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" HandleID="k8s-pod-network.ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Workload="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:43.049323 containerd[1440]: 2025-07-14 22:02:43.035 [INFO][5850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:43.049323 containerd[1440]: 2025-07-14 22:02:43.035 [INFO][5850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:02:43.049323 containerd[1440]: 2025-07-14 22:02:43.043 [WARNING][5850] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" HandleID="k8s-pod-network.ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Workload="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:43.049323 containerd[1440]: 2025-07-14 22:02:43.043 [INFO][5850] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" HandleID="k8s-pod-network.ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Workload="localhost-k8s-coredns--7c65d6cfc9--wftww-eth0" Jul 14 22:02:43.049323 containerd[1440]: 2025-07-14 22:02:43.045 [INFO][5850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:43.049323 containerd[1440]: 2025-07-14 22:02:43.047 [INFO][5841] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80" Jul 14 22:02:43.049772 containerd[1440]: time="2025-07-14T22:02:43.049358480Z" level=info msg="TearDown network for sandbox \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\" successfully" Jul 14 22:02:43.052066 containerd[1440]: time="2025-07-14T22:02:43.052038099Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:02:43.052128 containerd[1440]: time="2025-07-14T22:02:43.052113500Z" level=info msg="RemovePodSandbox \"ea4ab6ce23f7443f45abf7cbfc54592c3d683a12fe40064a23de61c0c08bdb80\" returns successfully" Jul 14 22:02:43.052766 containerd[1440]: time="2025-07-14T22:02:43.052740984Z" level=info msg="StopPodSandbox for \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\"" Jul 14 22:02:43.115915 containerd[1440]: 2025-07-14 22:02:43.084 [WARNING][5868] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0", GenerateName:"calico-apiserver-687547dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"db3be38c-70ff-4df0-a2d5-d0462c499962", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687547dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33", Pod:"calico-apiserver-687547dbff-8vw7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54e4f361bab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:43.115915 containerd[1440]: 2025-07-14 22:02:43.084 [INFO][5868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:43.115915 containerd[1440]: 2025-07-14 22:02:43.084 [INFO][5868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" iface="eth0" netns="" Jul 14 22:02:43.115915 containerd[1440]: 2025-07-14 22:02:43.084 [INFO][5868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:43.115915 containerd[1440]: 2025-07-14 22:02:43.084 [INFO][5868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:43.115915 containerd[1440]: 2025-07-14 22:02:43.102 [INFO][5876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" HandleID="k8s-pod-network.24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Workload="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:43.115915 containerd[1440]: 2025-07-14 22:02:43.102 [INFO][5876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:43.115915 containerd[1440]: 2025-07-14 22:02:43.102 [INFO][5876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:43.115915 containerd[1440]: 2025-07-14 22:02:43.110 [WARNING][5876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" HandleID="k8s-pod-network.24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Workload="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:43.115915 containerd[1440]: 2025-07-14 22:02:43.110 [INFO][5876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" HandleID="k8s-pod-network.24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Workload="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:43.115915 containerd[1440]: 2025-07-14 22:02:43.112 [INFO][5876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:43.115915 containerd[1440]: 2025-07-14 22:02:43.113 [INFO][5868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:43.116346 containerd[1440]: time="2025-07-14T22:02:43.115951303Z" level=info msg="TearDown network for sandbox \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\" successfully" Jul 14 22:02:43.116346 containerd[1440]: time="2025-07-14T22:02:43.115975944Z" level=info msg="StopPodSandbox for \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\" returns successfully" Jul 14 22:02:43.116410 containerd[1440]: time="2025-07-14T22:02:43.116382626Z" level=info msg="RemovePodSandbox for \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\"" Jul 14 22:02:43.116437 containerd[1440]: time="2025-07-14T22:02:43.116416907Z" level=info msg="Forcibly stopping sandbox \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\"" Jul 14 22:02:43.183588 containerd[1440]: 2025-07-14 22:02:43.148 [WARNING][5894] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0", GenerateName:"calico-apiserver-687547dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"db3be38c-70ff-4df0-a2d5-d0462c499962", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687547dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0e55c293e117d26ea19cf2c8241bf9507187a8641e68b074356b51136dfdf33", Pod:"calico-apiserver-687547dbff-8vw7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54e4f361bab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:43.183588 containerd[1440]: 2025-07-14 22:02:43.148 [INFO][5894] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:43.183588 containerd[1440]: 2025-07-14 22:02:43.148 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" iface="eth0" netns="" Jul 14 22:02:43.183588 containerd[1440]: 2025-07-14 22:02:43.148 [INFO][5894] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:43.183588 containerd[1440]: 2025-07-14 22:02:43.148 [INFO][5894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:43.183588 containerd[1440]: 2025-07-14 22:02:43.167 [INFO][5903] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" HandleID="k8s-pod-network.24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Workload="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:43.183588 containerd[1440]: 2025-07-14 22:02:43.167 [INFO][5903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:43.183588 containerd[1440]: 2025-07-14 22:02:43.167 [INFO][5903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:43.183588 containerd[1440]: 2025-07-14 22:02:43.176 [WARNING][5903] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" HandleID="k8s-pod-network.24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Workload="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:43.183588 containerd[1440]: 2025-07-14 22:02:43.176 [INFO][5903] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" HandleID="k8s-pod-network.24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Workload="localhost-k8s-calico--apiserver--687547dbff--8vw7r-eth0" Jul 14 22:02:43.183588 containerd[1440]: 2025-07-14 22:02:43.179 [INFO][5903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:43.183588 containerd[1440]: 2025-07-14 22:02:43.181 [INFO][5894] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f" Jul 14 22:02:43.183588 containerd[1440]: time="2025-07-14T22:02:43.183564934Z" level=info msg="TearDown network for sandbox \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\" successfully" Jul 14 22:02:43.187943 containerd[1440]: time="2025-07-14T22:02:43.187219839Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:02:43.188044 containerd[1440]: time="2025-07-14T22:02:43.188019524Z" level=info msg="RemovePodSandbox \"24ce30e00798cd68ea5fb846199c0f7ef474417e8b6e3f2c71af8e13b600dc9f\" returns successfully" Jul 14 22:02:43.188589 containerd[1440]: time="2025-07-14T22:02:43.188562528Z" level=info msg="StopPodSandbox for \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\"" Jul 14 22:02:43.265759 containerd[1440]: 2025-07-14 22:02:43.228 [WARNING][5925] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7c8f785c-fa60-472e-a6e1-a21274af8925", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1", Pod:"coredns-7c65d6cfc9-ss2ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice22738fd7f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:43.265759 containerd[1440]: 2025-07-14 22:02:43.228 [INFO][5925] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:43.265759 containerd[1440]: 2025-07-14 22:02:43.228 [INFO][5925] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" iface="eth0" netns="" Jul 14 22:02:43.265759 containerd[1440]: 2025-07-14 22:02:43.228 [INFO][5925] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:43.265759 containerd[1440]: 2025-07-14 22:02:43.228 [INFO][5925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:43.265759 containerd[1440]: 2025-07-14 22:02:43.247 [INFO][5938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" HandleID="k8s-pod-network.eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Workload="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:43.265759 containerd[1440]: 2025-07-14 22:02:43.247 [INFO][5938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:43.265759 containerd[1440]: 2025-07-14 22:02:43.247 [INFO][5938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:02:43.265759 containerd[1440]: 2025-07-14 22:02:43.260 [WARNING][5938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" HandleID="k8s-pod-network.eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Workload="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:43.265759 containerd[1440]: 2025-07-14 22:02:43.260 [INFO][5938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" HandleID="k8s-pod-network.eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Workload="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:43.265759 containerd[1440]: 2025-07-14 22:02:43.262 [INFO][5938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:43.265759 containerd[1440]: 2025-07-14 22:02:43.263 [INFO][5925] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:43.265759 containerd[1440]: time="2025-07-14T22:02:43.265616784Z" level=info msg="TearDown network for sandbox \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\" successfully" Jul 14 22:02:43.265759 containerd[1440]: time="2025-07-14T22:02:43.265645744Z" level=info msg="StopPodSandbox for \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\" returns successfully" Jul 14 22:02:43.266643 containerd[1440]: time="2025-07-14T22:02:43.266353549Z" level=info msg="RemovePodSandbox for \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\"" Jul 14 22:02:43.266643 containerd[1440]: time="2025-07-14T22:02:43.266386709Z" level=info msg="Forcibly stopping sandbox \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\"" Jul 14 22:02:43.335002 containerd[1440]: 2025-07-14 22:02:43.301 [WARNING][5955] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7c8f785c-fa60-472e-a6e1-a21274af8925", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 1, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8acffcb6e4d0bc99ce441f1b6c59ba90e638982cb7ab27a330c68cef985aeca1", Pod:"coredns-7c65d6cfc9-ss2ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice22738fd7f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:43.335002 containerd[1440]: 2025-07-14 22:02:43.302 [INFO][5955] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:43.335002 containerd[1440]: 2025-07-14 22:02:43.302 [INFO][5955] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" iface="eth0" netns="" Jul 14 22:02:43.335002 containerd[1440]: 2025-07-14 22:02:43.302 [INFO][5955] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:43.335002 containerd[1440]: 2025-07-14 22:02:43.302 [INFO][5955] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:43.335002 containerd[1440]: 2025-07-14 22:02:43.321 [INFO][5964] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" HandleID="k8s-pod-network.eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Workload="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:43.335002 containerd[1440]: 2025-07-14 22:02:43.321 [INFO][5964] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:43.335002 containerd[1440]: 2025-07-14 22:02:43.321 [INFO][5964] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:02:43.335002 containerd[1440]: 2025-07-14 22:02:43.330 [WARNING][5964] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" HandleID="k8s-pod-network.eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Workload="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:43.335002 containerd[1440]: 2025-07-14 22:02:43.330 [INFO][5964] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" HandleID="k8s-pod-network.eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Workload="localhost-k8s-coredns--7c65d6cfc9--ss2ss-eth0" Jul 14 22:02:43.335002 containerd[1440]: 2025-07-14 22:02:43.331 [INFO][5964] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:43.335002 containerd[1440]: 2025-07-14 22:02:43.333 [INFO][5955] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396" Jul 14 22:02:43.336484 containerd[1440]: time="2025-07-14T22:02:43.335438309Z" level=info msg="TearDown network for sandbox \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\" successfully" Jul 14 22:02:43.338359 containerd[1440]: time="2025-07-14T22:02:43.338318609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:02:43.338415 containerd[1440]: time="2025-07-14T22:02:43.338390210Z" level=info msg="RemovePodSandbox \"eac3c6392146a44b11cfda702ed710b25ba1448f17274da0d3546372f7b77396\" returns successfully" Jul 14 22:02:43.339011 containerd[1440]: time="2025-07-14T22:02:43.338975254Z" level=info msg="StopPodSandbox for \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\"" Jul 14 22:02:43.404819 containerd[1440]: 2025-07-14 22:02:43.373 [WARNING][5983] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0", GenerateName:"calico-apiserver-687547dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"b98ca949-7af6-44b4-b15a-f51c51b97182", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687547dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c", Pod:"calico-apiserver-687547dbff-nxmhz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8902523bf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:43.404819 containerd[1440]: 2025-07-14 22:02:43.373 [INFO][5983] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:43.404819 containerd[1440]: 2025-07-14 22:02:43.373 [INFO][5983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" iface="eth0" netns="" Jul 14 22:02:43.404819 containerd[1440]: 2025-07-14 22:02:43.373 [INFO][5983] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:43.404819 containerd[1440]: 2025-07-14 22:02:43.373 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:43.404819 containerd[1440]: 2025-07-14 22:02:43.391 [INFO][5992] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" HandleID="k8s-pod-network.8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Workload="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:43.404819 containerd[1440]: 2025-07-14 22:02:43.391 [INFO][5992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:43.404819 containerd[1440]: 2025-07-14 22:02:43.391 [INFO][5992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:43.404819 containerd[1440]: 2025-07-14 22:02:43.399 [WARNING][5992] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" HandleID="k8s-pod-network.8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Workload="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:43.404819 containerd[1440]: 2025-07-14 22:02:43.399 [INFO][5992] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" HandleID="k8s-pod-network.8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Workload="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:43.404819 containerd[1440]: 2025-07-14 22:02:43.401 [INFO][5992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:43.404819 containerd[1440]: 2025-07-14 22:02:43.402 [INFO][5983] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:43.405211 containerd[1440]: time="2025-07-14T22:02:43.404860912Z" level=info msg="TearDown network for sandbox \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\" successfully" Jul 14 22:02:43.405211 containerd[1440]: time="2025-07-14T22:02:43.404890792Z" level=info msg="StopPodSandbox for \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\" returns successfully" Jul 14 22:02:43.405664 containerd[1440]: time="2025-07-14T22:02:43.405631717Z" level=info msg="RemovePodSandbox for \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\"" Jul 14 22:02:43.405720 containerd[1440]: time="2025-07-14T22:02:43.405669118Z" level=info msg="Forcibly stopping sandbox \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\"" Jul 14 22:02:43.470544 containerd[1440]: 2025-07-14 22:02:43.436 [WARNING][6010] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0", GenerateName:"calico-apiserver-687547dbff-", Namespace:"calico-apiserver", SelfLink:"", UID:"b98ca949-7af6-44b4-b15a-f51c51b97182", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"687547dbff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1e20d6b7325d9f02c1d58f3b82fa6f81d8829366145eca453b0e1ca2363d881c", Pod:"calico-apiserver-687547dbff-nxmhz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid8902523bf7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:43.470544 containerd[1440]: 2025-07-14 22:02:43.437 [INFO][6010] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:43.470544 containerd[1440]: 2025-07-14 22:02:43.437 [INFO][6010] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" iface="eth0" netns="" Jul 14 22:02:43.470544 containerd[1440]: 2025-07-14 22:02:43.437 [INFO][6010] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:43.470544 containerd[1440]: 2025-07-14 22:02:43.437 [INFO][6010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:43.470544 containerd[1440]: 2025-07-14 22:02:43.454 [INFO][6019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" HandleID="k8s-pod-network.8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Workload="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:43.470544 containerd[1440]: 2025-07-14 22:02:43.454 [INFO][6019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:43.470544 containerd[1440]: 2025-07-14 22:02:43.454 [INFO][6019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:43.470544 containerd[1440]: 2025-07-14 22:02:43.465 [WARNING][6019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" HandleID="k8s-pod-network.8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Workload="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:43.470544 containerd[1440]: 2025-07-14 22:02:43.465 [INFO][6019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" HandleID="k8s-pod-network.8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Workload="localhost-k8s-calico--apiserver--687547dbff--nxmhz-eth0" Jul 14 22:02:43.470544 containerd[1440]: 2025-07-14 22:02:43.466 [INFO][6019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:43.470544 containerd[1440]: 2025-07-14 22:02:43.468 [INFO][6010] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad" Jul 14 22:02:43.470544 containerd[1440]: time="2025-07-14T22:02:43.470522289Z" level=info msg="TearDown network for sandbox \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\" successfully" Jul 14 22:02:43.506727 containerd[1440]: time="2025-07-14T22:02:43.506680900Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:02:43.506846 containerd[1440]: time="2025-07-14T22:02:43.506771381Z" level=info msg="RemovePodSandbox \"8521efbb9a71cfef1e2b1636255f567e0a8aec28221ff2be2498c137bbb232ad\" returns successfully" Jul 14 22:02:43.507267 containerd[1440]: time="2025-07-14T22:02:43.507227944Z" level=info msg="StopPodSandbox for \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\"" Jul 14 22:02:43.573044 containerd[1440]: 2025-07-14 22:02:43.540 [WARNING][6037] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--66djn-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d2664461-d898-4ff2-850a-8e3d73709f9a", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a", Pod:"goldmane-58fd7646b9-66djn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliae04b4262bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:43.573044 containerd[1440]: 2025-07-14 22:02:43.540 [INFO][6037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:43.573044 containerd[1440]: 2025-07-14 22:02:43.540 [INFO][6037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" iface="eth0" netns="" Jul 14 22:02:43.573044 containerd[1440]: 2025-07-14 22:02:43.540 [INFO][6037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:43.573044 containerd[1440]: 2025-07-14 22:02:43.540 [INFO][6037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:43.573044 containerd[1440]: 2025-07-14 22:02:43.559 [INFO][6046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" HandleID="k8s-pod-network.195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Workload="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:43.573044 containerd[1440]: 2025-07-14 22:02:43.559 [INFO][6046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:43.573044 containerd[1440]: 2025-07-14 22:02:43.559 [INFO][6046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:43.573044 containerd[1440]: 2025-07-14 22:02:43.567 [WARNING][6046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" HandleID="k8s-pod-network.195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Workload="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:43.573044 containerd[1440]: 2025-07-14 22:02:43.567 [INFO][6046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" HandleID="k8s-pod-network.195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Workload="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:43.573044 containerd[1440]: 2025-07-14 22:02:43.569 [INFO][6046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:43.573044 containerd[1440]: 2025-07-14 22:02:43.571 [INFO][6037] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:43.573044 containerd[1440]: time="2025-07-14T22:02:43.573021761Z" level=info msg="TearDown network for sandbox \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\" successfully" Jul 14 22:02:43.573044 containerd[1440]: time="2025-07-14T22:02:43.573046001Z" level=info msg="StopPodSandbox for \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\" returns successfully" Jul 14 22:02:43.573584 containerd[1440]: time="2025-07-14T22:02:43.573467284Z" level=info msg="RemovePodSandbox for \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\"" Jul 14 22:02:43.573584 containerd[1440]: time="2025-07-14T22:02:43.573505245Z" level=info msg="Forcibly stopping sandbox \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\"" Jul 14 22:02:43.635575 containerd[1440]: 2025-07-14 22:02:43.605 [WARNING][6064] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--66djn-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"d2664461-d898-4ff2-850a-8e3d73709f9a", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 2, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"576117d529d322b1829fd8f9dc462f042bd2981b449d8cf4b4129bb167979e1a", Pod:"goldmane-58fd7646b9-66djn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliae04b4262bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:02:43.635575 containerd[1440]: 2025-07-14 22:02:43.605 [INFO][6064] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:43.635575 containerd[1440]: 2025-07-14 22:02:43.605 [INFO][6064] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" iface="eth0" netns="" Jul 14 22:02:43.635575 containerd[1440]: 2025-07-14 22:02:43.605 [INFO][6064] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:43.635575 containerd[1440]: 2025-07-14 22:02:43.605 [INFO][6064] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:43.635575 containerd[1440]: 2025-07-14 22:02:43.622 [INFO][6072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" HandleID="k8s-pod-network.195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Workload="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:43.635575 containerd[1440]: 2025-07-14 22:02:43.622 [INFO][6072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:43.635575 containerd[1440]: 2025-07-14 22:02:43.622 [INFO][6072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:43.635575 containerd[1440]: 2025-07-14 22:02:43.630 [WARNING][6072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" HandleID="k8s-pod-network.195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Workload="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:43.635575 containerd[1440]: 2025-07-14 22:02:43.630 [INFO][6072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" HandleID="k8s-pod-network.195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Workload="localhost-k8s-goldmane--58fd7646b9--66djn-eth0" Jul 14 22:02:43.635575 containerd[1440]: 2025-07-14 22:02:43.631 [INFO][6072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:43.635575 containerd[1440]: 2025-07-14 22:02:43.633 [INFO][6064] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c" Jul 14 22:02:43.636162 containerd[1440]: time="2025-07-14T22:02:43.635615516Z" level=info msg="TearDown network for sandbox \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\" successfully" Jul 14 22:02:43.638500 containerd[1440]: time="2025-07-14T22:02:43.638446536Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:02:43.638558 containerd[1440]: time="2025-07-14T22:02:43.638541617Z" level=info msg="RemovePodSandbox \"195c1b280c90b79092826cb10f21f75231cae620b33894aff0421b646ad6756c\" returns successfully" Jul 14 22:02:43.639030 containerd[1440]: time="2025-07-14T22:02:43.639005700Z" level=info msg="StopPodSandbox for \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\"" Jul 14 22:02:43.702673 containerd[1440]: 2025-07-14 22:02:43.671 [WARNING][6090] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" WorkloadEndpoint="localhost-k8s-whisker--78b4f5fbc4--jkv4w-eth0" Jul 14 22:02:43.702673 containerd[1440]: 2025-07-14 22:02:43.671 [INFO][6090] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:43.702673 containerd[1440]: 2025-07-14 22:02:43.671 [INFO][6090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" iface="eth0" netns="" Jul 14 22:02:43.702673 containerd[1440]: 2025-07-14 22:02:43.671 [INFO][6090] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:43.702673 containerd[1440]: 2025-07-14 22:02:43.672 [INFO][6090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:43.702673 containerd[1440]: 2025-07-14 22:02:43.689 [INFO][6100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" HandleID="k8s-pod-network.7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Workload="localhost-k8s-whisker--78b4f5fbc4--jkv4w-eth0" Jul 14 22:02:43.702673 containerd[1440]: 2025-07-14 22:02:43.689 [INFO][6100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:43.702673 containerd[1440]: 2025-07-14 22:02:43.689 [INFO][6100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:43.702673 containerd[1440]: 2025-07-14 22:02:43.697 [WARNING][6100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" HandleID="k8s-pod-network.7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Workload="localhost-k8s-whisker--78b4f5fbc4--jkv4w-eth0" Jul 14 22:02:43.702673 containerd[1440]: 2025-07-14 22:02:43.697 [INFO][6100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" HandleID="k8s-pod-network.7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Workload="localhost-k8s-whisker--78b4f5fbc4--jkv4w-eth0" Jul 14 22:02:43.702673 containerd[1440]: 2025-07-14 22:02:43.699 [INFO][6100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:43.702673 containerd[1440]: 2025-07-14 22:02:43.701 [INFO][6090] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:43.703030 containerd[1440]: time="2025-07-14T22:02:43.702723943Z" level=info msg="TearDown network for sandbox \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\" successfully" Jul 14 22:02:43.703030 containerd[1440]: time="2025-07-14T22:02:43.702763743Z" level=info msg="StopPodSandbox for \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\" returns successfully" Jul 14 22:02:43.703319 containerd[1440]: time="2025-07-14T22:02:43.703295307Z" level=info msg="RemovePodSandbox for \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\"" Jul 14 22:02:43.703374 containerd[1440]: time="2025-07-14T22:02:43.703329507Z" level=info msg="Forcibly stopping sandbox \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\"" Jul 14 22:02:43.772756 containerd[1440]: 2025-07-14 22:02:43.739 [WARNING][6118] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" WorkloadEndpoint="localhost-k8s-whisker--78b4f5fbc4--jkv4w-eth0" Jul 14 22:02:43.772756 containerd[1440]: 2025-07-14 22:02:43.739 [INFO][6118] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:43.772756 containerd[1440]: 2025-07-14 22:02:43.739 [INFO][6118] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" iface="eth0" netns="" Jul 14 22:02:43.772756 containerd[1440]: 2025-07-14 22:02:43.739 [INFO][6118] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:43.772756 containerd[1440]: 2025-07-14 22:02:43.739 [INFO][6118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:43.772756 containerd[1440]: 2025-07-14 22:02:43.758 [INFO][6127] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" HandleID="k8s-pod-network.7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Workload="localhost-k8s-whisker--78b4f5fbc4--jkv4w-eth0" Jul 14 22:02:43.772756 containerd[1440]: 2025-07-14 22:02:43.758 [INFO][6127] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:02:43.772756 containerd[1440]: 2025-07-14 22:02:43.758 [INFO][6127] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:02:43.772756 containerd[1440]: 2025-07-14 22:02:43.767 [WARNING][6127] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" HandleID="k8s-pod-network.7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Workload="localhost-k8s-whisker--78b4f5fbc4--jkv4w-eth0" Jul 14 22:02:43.772756 containerd[1440]: 2025-07-14 22:02:43.767 [INFO][6127] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" HandleID="k8s-pod-network.7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Workload="localhost-k8s-whisker--78b4f5fbc4--jkv4w-eth0" Jul 14 22:02:43.772756 containerd[1440]: 2025-07-14 22:02:43.768 [INFO][6127] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:02:43.772756 containerd[1440]: 2025-07-14 22:02:43.770 [INFO][6118] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad" Jul 14 22:02:43.773106 containerd[1440]: time="2025-07-14T22:02:43.772787750Z" level=info msg="TearDown network for sandbox \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\" successfully" Jul 14 22:02:43.775353 containerd[1440]: time="2025-07-14T22:02:43.775313808Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:02:43.775400 containerd[1440]: time="2025-07-14T22:02:43.775383448Z" level=info msg="RemovePodSandbox \"7a145e603859e43641a96fc56a4778abdfde56f8283cf9e214ff6447218ad9ad\" returns successfully" Jul 14 22:02:56.352394 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:38208.service - OpenSSH per-connection server daemon (10.0.0.1:38208). Jul 14 22:02:56.400251 sshd[6215]: Accepted publickey for core from 10.0.0.1 port 38208 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:02:56.402041 sshd[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:02:56.405775 systemd-logind[1420]: New session 8 of user core. Jul 14 22:02:56.416614 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 14 22:02:56.895044 sshd[6215]: pam_unix(sshd:session): session closed for user core Jul 14 22:02:56.898390 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:38208.service: Deactivated successfully. Jul 14 22:02:56.900650 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 22:02:56.901336 systemd-logind[1420]: Session 8 logged out. Waiting for processes to exit. Jul 14 22:02:56.902187 systemd-logind[1420]: Removed session 8. Jul 14 22:02:59.335050 kubelet[2535]: I0714 22:02:59.335004 2535 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:03:01.907282 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:38230.service - OpenSSH per-connection server daemon (10.0.0.1:38230). Jul 14 22:03:01.947838 sshd[6258]: Accepted publickey for core from 10.0.0.1 port 38230 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:03:01.949226 sshd[6258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:03:01.955529 systemd-logind[1420]: New session 9 of user core. Jul 14 22:03:01.965667 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 14 22:03:02.139345 sshd[6258]: pam_unix(sshd:session): session closed for user core Jul 14 22:03:02.142820 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:38230.service: Deactivated successfully. Jul 14 22:03:02.146064 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 22:03:02.146815 systemd-logind[1420]: Session 9 logged out. Waiting for processes to exit. Jul 14 22:03:02.147613 systemd-logind[1420]: Removed session 9. Jul 14 22:03:05.609206 kubelet[2535]: E0714 22:03:05.609167 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:03:07.150581 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:35716.service - OpenSSH per-connection server daemon (10.0.0.1:35716). Jul 14 22:03:07.191401 sshd[6273]: Accepted publickey for core from 10.0.0.1 port 35716 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:03:07.193110 sshd[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:03:07.196639 systemd-logind[1420]: New session 10 of user core. Jul 14 22:03:07.208595 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 14 22:03:07.356799 sshd[6273]: pam_unix(sshd:session): session closed for user core Jul 14 22:03:07.360728 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:35716.service: Deactivated successfully. Jul 14 22:03:07.362652 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 22:03:07.364033 systemd-logind[1420]: Session 10 logged out. Waiting for processes to exit. Jul 14 22:03:07.365928 systemd-logind[1420]: Removed session 10. Jul 14 22:03:09.608920 kubelet[2535]: E0714 22:03:09.608818 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:03:12.374963 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:35764.service - OpenSSH per-connection server daemon (10.0.0.1:35764). Jul 14 22:03:12.430799 sshd[6288]: Accepted publickey for core from 10.0.0.1 port 35764 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:03:12.432707 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:03:12.440046 systemd-logind[1420]: New session 11 of user core. Jul 14 22:03:12.445693 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 14 22:03:12.714082 sshd[6288]: pam_unix(sshd:session): session closed for user core Jul 14 22:03:12.718137 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:35764.service: Deactivated successfully. Jul 14 22:03:12.720537 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 22:03:12.722295 systemd-logind[1420]: Session 11 logged out. Waiting for processes to exit. Jul 14 22:03:12.723412 systemd-logind[1420]: Removed session 11. Jul 14 22:03:17.609216 kubelet[2535]: E0714 22:03:17.609165 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:03:17.738731 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:34174.service - OpenSSH per-connection server daemon (10.0.0.1:34174). 
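[Editor's note] The recurring kubelet "Nameserver limits exceeded" errors stem from the glibc resolver honouring at most three nameserver entries in resolv.conf: this node evidently lists more than the three that survive (1.1.1.1, 1.0.0.1, 8.8.8.8), so kubelet drops the extras and re-logs the warning each time it rebuilds a pod's DNS configuration. A simplified Go sketch of that check follows, assuming the standard resolv.conf format; this is not kubelet's actual parser.

// Simplified sketch of the check behind kubelet's "Nameserver limits
// exceeded" message: glibc honours at most 3 nameserver lines (MAXNS),
// so extras are dropped and a warning is emitted. Not kubelet's real code.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit (MAXNS)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		kept := servers[:maxNameservers]
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(kept, " "))
	}
}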
Jul 14 22:03:17.775140 sshd[6355]: Accepted publickey for core from 10.0.0.1 port 34174 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:03:17.776406 sshd[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:03:17.783927 systemd-logind[1420]: New session 12 of user core. Jul 14 22:03:17.791611 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 14 22:03:17.958063 sshd[6355]: pam_unix(sshd:session): session closed for user core Jul 14 22:03:17.968182 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:34174.service: Deactivated successfully. Jul 14 22:03:17.970113 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 22:03:17.972394 systemd-logind[1420]: Session 12 logged out. Waiting for processes to exit. Jul 14 22:03:17.983871 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:34190.service - OpenSSH per-connection server daemon (10.0.0.1:34190). Jul 14 22:03:17.985329 systemd-logind[1420]: Removed session 12. Jul 14 22:03:18.014565 sshd[6375]: Accepted publickey for core from 10.0.0.1 port 34190 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:03:18.015883 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:03:18.020053 systemd-logind[1420]: New session 13 of user core. Jul 14 22:03:18.027603 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 14 22:03:18.277054 sshd[6375]: pam_unix(sshd:session): session closed for user core Jul 14 22:03:18.293443 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:34190.service: Deactivated successfully. Jul 14 22:03:18.298383 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 22:03:18.300699 systemd-logind[1420]: Session 13 logged out. Waiting for processes to exit. Jul 14 22:03:18.312431 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:34204.service - OpenSSH per-connection server daemon (10.0.0.1:34204). Jul 14 22:03:18.316715 systemd-logind[1420]: Removed session 13. Jul 14 22:03:18.346488 sshd[6387]: Accepted publickey for core from 10.0.0.1 port 34204 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:03:18.348139 sshd[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:03:18.352223 systemd-logind[1420]: New session 14 of user core. Jul 14 22:03:18.366642 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 14 22:03:18.538732 sshd[6387]: pam_unix(sshd:session): session closed for user core Jul 14 22:03:18.543158 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:34204.service: Deactivated successfully. Jul 14 22:03:18.545384 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 22:03:18.547212 systemd-logind[1420]: Session 14 logged out. Waiting for processes to exit. Jul 14 22:03:18.548263 systemd-logind[1420]: Removed session 14. Jul 14 22:03:18.609513 kubelet[2535]: E0714 22:03:18.609407 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:03:23.550473 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:52962.service - OpenSSH per-connection server daemon (10.0.0.1:52962). 
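[Editor's note] Sessions 12, 13, and 14 above are three logins from the same client in quick succession, and each "Started sshd@N-10.0.0.97:22-10.0.0.1:PORT.service" line records a per-connection OpenSSH instance spawned through systemd socket activation, paired by pam_systemd with a session-N.scope under systemd-logind; that pairing is why every login and logout appears as matched service/scope activations and deactivations. The instance name packs a connection counter plus the local and peer endpoints. A small Go sketch that unpacks the convention as it appears in these lines; the format is inferred from the log itself.

// Parse the per-connection sshd unit names seen in the log, e.g.
// "sshd@11-10.0.0.97:22-10.0.0.1:34174.service" ->
// counter 11, local 10.0.0.97:22, peer 10.0.0.1:34174.
// The naming format is inferred from the log lines above.
package main

import (
	"fmt"
	"regexp"
)

var unitRe = regexp.MustCompile(`^sshd@(\d+)-([\d.]+:\d+)-([\d.]+:\d+)\.service$`)

func parseUnit(name string) (counter, local, peer string, ok bool) {
	m := unitRe.FindStringSubmatch(name)
	if m == nil {
		return "", "", "", false
	}
	return m[1], m[2], m[3], true
}

func main() {
	for _, u := range []string{
		"sshd@11-10.0.0.97:22-10.0.0.1:34174.service",
		"sshd@14-10.0.0.97:22-10.0.0.1:52962.service",
	} {
		if c, l, p, ok := parseUnit(u); ok {
			fmt.Printf("connection #%s: %s <- %s\n", c, l, p)
		}
	}
}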
Jul 14 22:03:23.585744 sshd[6424]: Accepted publickey for core from 10.0.0.1 port 52962 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:03:23.587155 sshd[6424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:03:23.591596 systemd-logind[1420]: New session 15 of user core. Jul 14 22:03:23.602691 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 14 22:03:23.735680 sshd[6424]: pam_unix(sshd:session): session closed for user core Jul 14 22:03:23.745687 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:52962.service: Deactivated successfully. Jul 14 22:03:23.749556 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 22:03:23.751310 systemd-logind[1420]: Session 15 logged out. Waiting for processes to exit. Jul 14 22:03:23.759805 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:52978.service - OpenSSH per-connection server daemon (10.0.0.1:52978). Jul 14 22:03:23.761303 systemd-logind[1420]: Removed session 15. Jul 14 22:03:23.793064 sshd[6438]: Accepted publickey for core from 10.0.0.1 port 52978 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:03:23.794835 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:03:23.799227 systemd-logind[1420]: New session 16 of user core. Jul 14 22:03:23.806638 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 14 22:03:33.608515 kubelet[2535]: E0714 22:03:33.608470 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:03:34.012774 sshd[6438]: pam_unix(sshd:session): session closed for user core Jul 14 22:03:34.024116 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:52978.service: Deactivated successfully. Jul 14 22:03:34.025896 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 22:03:34.028712 systemd-logind[1420]: Session 16 logged out. Waiting for processes to exit. Jul 14 22:03:34.029937 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:58664.service - OpenSSH per-connection server daemon (10.0.0.1:58664). Jul 14 22:03:34.031042 systemd-logind[1420]: Removed session 16. Jul 14 22:03:34.077516 sshd[6472]: Accepted publickey for core from 10.0.0.1 port 58664 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:03:34.078637 sshd[6472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:03:34.082115 systemd-logind[1420]: New session 17 of user core. Jul 14 22:03:34.093606 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 14 22:03:37.609265 kubelet[2535]: E0714 22:03:37.609224 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:03:47.609574 kubelet[2535]: E0714 22:03:47.609401 2535 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:03:55.840261 sshd[6472]: pam_unix(sshd:session): session closed for user core Jul 14 22:03:55.851637 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:58664.service: Deactivated successfully. Jul 14 22:03:55.857187 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 22:03:55.859117 systemd-logind[1420]: Session 17 logged out. Waiting for processes to exit. 
Jul 14 22:03:55.872352 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:40558.service - OpenSSH per-connection server daemon (10.0.0.1:40558). Jul 14 22:03:55.874737 systemd-logind[1420]: Removed session 17. Jul 14 22:03:55.920267 sshd[6565]: Accepted publickey for core from 10.0.0.1 port 40558 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:03:55.921825 sshd[6565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:03:55.926356 systemd-logind[1420]: New session 18 of user core. Jul 14 22:03:55.931634 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 14 22:03:56.655862 sshd[6565]: pam_unix(sshd:session): session closed for user core Jul 14 22:03:56.662559 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:40558.service: Deactivated successfully. Jul 14 22:03:56.665346 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 22:03:56.667209 systemd-logind[1420]: Session 18 logged out. Waiting for processes to exit. Jul 14 22:03:56.668293 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:40562.service - OpenSSH per-connection server daemon (10.0.0.1:40562). Jul 14 22:03:56.671501 systemd-logind[1420]: Removed session 18. Jul 14 22:03:56.712209 sshd[6587]: Accepted publickey for core from 10.0.0.1 port 40562 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:03:56.713652 sshd[6587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:03:56.718947 systemd-logind[1420]: New session 19 of user core. Jul 14 22:03:56.725689 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 14 22:03:56.852277 sshd[6587]: pam_unix(sshd:session): session closed for user core Jul 14 22:03:56.855286 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:40562.service: Deactivated successfully. Jul 14 22:03:56.857274 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 22:03:56.858970 systemd-logind[1420]: Session 19 logged out. Waiting for processes to exit. Jul 14 22:03:56.859956 systemd-logind[1420]: Removed session 19. Jul 14 22:04:01.863813 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:40566.service - OpenSSH per-connection server daemon (10.0.0.1:40566). Jul 14 22:04:01.902433 sshd[6625]: Accepted publickey for core from 10.0.0.1 port 40566 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:04:01.903995 sshd[6625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:04:01.907945 systemd-logind[1420]: New session 20 of user core. Jul 14 22:04:01.924664 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 14 22:04:02.046282 sshd[6625]: pam_unix(sshd:session): session closed for user core Jul 14 22:04:02.051963 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:40566.service: Deactivated successfully. Jul 14 22:04:02.055040 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 22:04:02.056697 systemd-logind[1420]: Session 20 logged out. Waiting for processes to exit. Jul 14 22:04:02.057498 systemd-logind[1420]: Removed session 20. Jul 14 22:04:07.064809 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:39198.service - OpenSSH per-connection server daemon (10.0.0.1:39198). Jul 14 22:04:07.099177 sshd[6640]: Accepted publickey for core from 10.0.0.1 port 39198 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:04:07.100614 sshd[6640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:04:07.106936 systemd-logind[1420]: New session 21 of user core. 
Jul 14 22:04:07.118673 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 14 22:04:07.239611 sshd[6640]: pam_unix(sshd:session): session closed for user core Jul 14 22:04:07.243813 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:39198.service: Deactivated successfully. Jul 14 22:04:07.246212 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 22:04:07.248384 systemd-logind[1420]: Session 21 logged out. Waiting for processes to exit. Jul 14 22:04:07.249691 systemd-logind[1420]: Removed session 21. Jul 14 22:04:12.254757 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:39214.service - OpenSSH per-connection server daemon (10.0.0.1:39214). Jul 14 22:04:12.291921 sshd[6676]: Accepted publickey for core from 10.0.0.1 port 39214 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 22:04:12.293630 sshd[6676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:04:12.299081 systemd-logind[1420]: New session 22 of user core. Jul 14 22:04:12.310641 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 14 22:04:12.496132 sshd[6676]: pam_unix(sshd:session): session closed for user core Jul 14 22:04:12.500866 systemd-logind[1420]: Session 22 logged out. Waiting for processes to exit. Jul 14 22:04:12.501276 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:39214.service: Deactivated successfully. Jul 14 22:04:12.505110 systemd[1]: session-22.scope: Deactivated successfully. Jul 14 22:04:12.507226 systemd-logind[1420]: Removed session 22.