Jul 11 00:03:55.893998 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 11 00:03:55.894018 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Jul 10 22:41:52 -00 2025
Jul 11 00:03:55.894028 kernel: KASLR enabled
Jul 11 00:03:55.894034 kernel: efi: EFI v2.7 by EDK II
Jul 11 00:03:55.894040 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 11 00:03:55.894046 kernel: random: crng init done
Jul 11 00:03:55.894053 kernel: ACPI: Early table checksum verification disabled
Jul 11 00:03:55.894059 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 11 00:03:55.894065 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 11 00:03:55.894072 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:03:55.894079 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:03:55.894085 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:03:55.894091 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:03:55.894097 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:03:55.894104 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:03:55.894112 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:03:55.894118 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:03:55.894125 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 00:03:55.894131 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 11 00:03:55.894137 kernel: NUMA: Failed to initialise from firmware
Jul 11 00:03:55.894153 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:03:55.894159 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 11 00:03:55.894165 kernel: Zone ranges:
Jul 11 00:03:55.894172 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:03:55.894178 kernel: DMA32 empty
Jul 11 00:03:55.894186 kernel: Normal empty
Jul 11 00:03:55.894193 kernel: Movable zone start for each node
Jul 11 00:03:55.894199 kernel: Early memory node ranges
Jul 11 00:03:55.894205 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 11 00:03:55.894212 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 11 00:03:55.894218 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 11 00:03:55.894224 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 11 00:03:55.894230 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 11 00:03:55.894237 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 11 00:03:55.894243 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 11 00:03:55.894249 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 00:03:55.894255 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 11 00:03:55.894263 kernel: psci: probing for conduit method from ACPI.
Jul 11 00:03:55.894269 kernel: psci: PSCIv1.1 detected in firmware.
Jul 11 00:03:55.894275 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 11 00:03:55.894285 kernel: psci: Trusted OS migration not required
Jul 11 00:03:55.894292 kernel: psci: SMC Calling Convention v1.1
Jul 11 00:03:55.894299 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 11 00:03:55.894307 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 11 00:03:55.894314 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 11 00:03:55.894321 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 11 00:03:55.894328 kernel: Detected PIPT I-cache on CPU0
Jul 11 00:03:55.894335 kernel: CPU features: detected: GIC system register CPU interface
Jul 11 00:03:55.894341 kernel: CPU features: detected: Hardware dirty bit management
Jul 11 00:03:55.894348 kernel: CPU features: detected: Spectre-v4
Jul 11 00:03:55.894355 kernel: CPU features: detected: Spectre-BHB
Jul 11 00:03:55.894362 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 11 00:03:55.894368 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 11 00:03:55.894376 kernel: CPU features: detected: ARM erratum 1418040
Jul 11 00:03:55.894383 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 11 00:03:55.894390 kernel: alternatives: applying boot alternatives
Jul 11 00:03:55.894398 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990
Jul 11 00:03:55.894405 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 00:03:55.894412 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 00:03:55.894419 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 00:03:55.894425 kernel: Fallback order for Node 0: 0
Jul 11 00:03:55.894432 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 11 00:03:55.894439 kernel: Policy zone: DMA
Jul 11 00:03:55.894446 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 00:03:55.894454 kernel: software IO TLB: area num 4.
Jul 11 00:03:55.894461 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 11 00:03:55.894468 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Jul 11 00:03:55.894475 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 00:03:55.894482 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 00:03:55.894489 kernel: rcu: RCU event tracing is enabled.
Jul 11 00:03:55.894496 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 00:03:55.894503 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 00:03:55.894510 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 00:03:55.894517 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
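
The command line logged above is what pins the read-only /usr partition to a dm-verity root hash (verity.usrhash=...). A minimal sketch of splitting such a parameter string into key/value pairs, using the exact string from the log; the parsing rule (only the first '=' separates key from value, bare words become boolean flags) is an assumption that matches the shape of these particular parameters, not a reimplementation of the kernel's parser.

    # Sketch: split the logged command line into a dict; bare words become
    # boolean flags, and only the first '=' separates key from value.
    import shlex

    cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
               "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
               "console=ttyS0,115200 flatcar.first_boot=detected acpi=force "
               "verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990")

    params = {}
    for token in shlex.split(cmdline):
        key, sep, value = token.partition("=")
        params[key] = value if sep else True

    print(params["root"])            # LABEL=ROOT
    print(params["verity.usrhash"])  # the root hash dracut will enforce for /usr
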
Jul 11 00:03:55.894524 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 00:03:55.894531 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 11 00:03:55.894539 kernel: GICv3: 256 SPIs implemented
Jul 11 00:03:55.894546 kernel: GICv3: 0 Extended SPIs implemented
Jul 11 00:03:55.894552 kernel: Root IRQ handler: gic_handle_irq
Jul 11 00:03:55.894559 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 11 00:03:55.894566 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 11 00:03:55.894573 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 11 00:03:55.894580 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 11 00:03:55.894587 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 11 00:03:55.894594 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 11 00:03:55.894601 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 11 00:03:55.894608 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 00:03:55.894616 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:03:55.894623 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 11 00:03:55.894630 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 11 00:03:55.894637 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 11 00:03:55.894643 kernel: arm-pv: using stolen time PV
Jul 11 00:03:55.894650 kernel: Console: colour dummy device 80x25
Jul 11 00:03:55.894658 kernel: ACPI: Core revision 20230628
Jul 11 00:03:55.894665 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 11 00:03:55.894672 kernel: pid_max: default: 32768 minimum: 301
Jul 11 00:03:55.894679 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 11 00:03:55.894687 kernel: landlock: Up and running.
Jul 11 00:03:55.894694 kernel: SELinux: Initializing.
Jul 11 00:03:55.894701 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:03:55.894708 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 00:03:55.894715 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:03:55.894722 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 00:03:55.894729 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 00:03:55.894736 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 00:03:55.894743 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 11 00:03:55.894752 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 11 00:03:55.894759 kernel: Remapping and enabling EFI services.
Jul 11 00:03:55.894766 kernel: smp: Bringing up secondary CPUs ...
Jul 11 00:03:55.894773 kernel: Detected PIPT I-cache on CPU1
Jul 11 00:03:55.894780 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 11 00:03:55.894787 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 11 00:03:55.894794 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:03:55.894810 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 11 00:03:55.894819 kernel: Detected PIPT I-cache on CPU2
Jul 11 00:03:55.894826 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 11 00:03:55.894835 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 11 00:03:55.894842 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:03:55.894863 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 11 00:03:55.894872 kernel: Detected PIPT I-cache on CPU3
Jul 11 00:03:55.894879 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 11 00:03:55.894886 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 11 00:03:55.894894 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 00:03:55.894901 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 11 00:03:55.894908 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 00:03:55.894917 kernel: SMP: Total of 4 processors activated.
Jul 11 00:03:55.894924 kernel: CPU features: detected: 32-bit EL0 Support
Jul 11 00:03:55.894932 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 11 00:03:55.894939 kernel: CPU features: detected: Common not Private translations
Jul 11 00:03:55.894947 kernel: CPU features: detected: CRC32 instructions
Jul 11 00:03:55.894954 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 11 00:03:55.894962 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 11 00:03:55.894969 kernel: CPU features: detected: LSE atomic instructions
Jul 11 00:03:55.894978 kernel: CPU features: detected: Privileged Access Never
Jul 11 00:03:55.894985 kernel: CPU features: detected: RAS Extension Support
Jul 11 00:03:55.894992 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 11 00:03:55.894999 kernel: CPU: All CPU(s) started at EL1
Jul 11 00:03:55.895007 kernel: alternatives: applying system-wide alternatives
Jul 11 00:03:55.895014 kernel: devtmpfs: initialized
Jul 11 00:03:55.895022 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 00:03:55.895030 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 00:03:55.895037 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 00:03:55.895046 kernel: SMBIOS 3.0.0 present.
Jul 11 00:03:55.895053 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 11 00:03:55.895060 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 00:03:55.895068 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 11 00:03:55.895075 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 11 00:03:55.895082 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 11 00:03:55.895090 kernel: audit: initializing netlink subsys (disabled)
Jul 11 00:03:55.895097 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jul 11 00:03:55.895105 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 00:03:55.895113 kernel: cpuidle: using governor menu
Jul 11 00:03:55.895120 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 11 00:03:55.895128 kernel: ASID allocator initialised with 32768 entries
Jul 11 00:03:55.895147 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 00:03:55.895154 kernel: Serial: AMBA PL011 UART driver
Jul 11 00:03:55.895162 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 11 00:03:55.895169 kernel: Modules: 0 pages in range for non-PLT usage
Jul 11 00:03:55.895176 kernel: Modules: 509008 pages in range for PLT usage
Jul 11 00:03:55.895185 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 00:03:55.895193 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 00:03:55.895200 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 11 00:03:55.895207 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 11 00:03:55.895215 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 00:03:55.895222 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 00:03:55.895230 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 11 00:03:55.895237 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 11 00:03:55.895244 kernel: ACPI: Added _OSI(Module Device)
Jul 11 00:03:55.895253 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 00:03:55.895260 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 00:03:55.895267 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 00:03:55.895274 kernel: ACPI: Interpreter enabled
Jul 11 00:03:55.895281 kernel: ACPI: Using GIC for interrupt routing
Jul 11 00:03:55.895288 kernel: ACPI: MCFG table detected, 1 entries
Jul 11 00:03:55.895295 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 11 00:03:55.895303 kernel: printk: console [ttyAMA0] enabled
Jul 11 00:03:55.895433 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 00:03:55.895509 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 00:03:55.895572 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 11 00:03:55.895634 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 11 00:03:55.895702 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 11 00:03:55.895711 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 11 00:03:55.895719 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 11 00:03:55.895719 kernel: PCI host bridge to bus 0000:00
Jul 11 00:03:55.895787 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 11 00:03:55.895878 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 11 00:03:55.895943 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 11 00:03:55.896017 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 00:03:55.896105 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 11 00:03:55.896194 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 11 00:03:55.896264 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 11 00:03:55.896334 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 11 00:03:55.896401 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:03:55.896468 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 00:03:55.896551 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 11 00:03:55.896618 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 11 00:03:55.896677 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 11 00:03:55.896735 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 11 00:03:55.896793 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 11 00:03:55.896803 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 11 00:03:55.896810 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 11 00:03:55.896818 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 11 00:03:55.896825 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 11 00:03:55.896832 kernel: iommu: Default domain type: Translated
Jul 11 00:03:55.896839 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 11 00:03:55.896925 kernel: efivars: Registered efivars operations
Jul 11 00:03:55.896934 kernel: vgaarb: loaded
Jul 11 00:03:55.896945 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 11 00:03:55.896952 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 00:03:55.896960 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 00:03:55.896968 kernel: pnp: PnP ACPI init
Jul 11 00:03:55.897048 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 11 00:03:55.897059 kernel: pnp: PnP ACPI: found 1 devices
Jul 11 00:03:55.897066 kernel: NET: Registered PF_INET protocol family
Jul 11 00:03:55.897073 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 00:03:55.897084 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 00:03:55.897091 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 00:03:55.897099 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 00:03:55.897106 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 00:03:55.897113 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 00:03:55.897120 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:03:55.897128 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 00:03:55.897135 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 00:03:55.897150 kernel: PCI: CLS 0 bytes, default 64
Jul 11 00:03:55.897159 kernel: kvm [1]: HYP mode not available
Jul 11 00:03:55.897167 kernel: Initialise system trusted keyrings
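
The enumeration above shows the regions assigned to the single virtio device 0000:00:01.0 (vendor:device 1af4:1005). A small sketch that pulls the BAR assignments out of log lines in exactly this format and computes each window's size; the regex is tailored to the lines shown here and nothing more.

    # Sketch: recover the BAR windows assigned to 0000:00:01.0 from the
    # log lines above and compute each region's size in bytes.
    import re

    log = """\
    pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
    pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
    pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
    """

    pattern = re.compile(r"BAR (\d+): assigned \[(mem|io)\s+0x([0-9a-f]+)-0x([0-9a-f]+)")
    for bar, kind, start, end in pattern.findall(log):
        size = int(end, 16) - int(start, 16) + 1
        print(f"BAR {bar}: {kind}, {size} bytes")  # 16384, 4096 and 32 bytes
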
Jul 11 00:03:55.897174 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 00:03:55.897181 kernel: Key type asymmetric registered
Jul 11 00:03:55.897188 kernel: Asymmetric key parser 'x509' registered
Jul 11 00:03:55.897195 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 11 00:03:55.897203 kernel: io scheduler mq-deadline registered
Jul 11 00:03:55.897210 kernel: io scheduler kyber registered
Jul 11 00:03:55.897217 kernel: io scheduler bfq registered
Jul 11 00:03:55.897226 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 11 00:03:55.897234 kernel: ACPI: button: Power Button [PWRB]
Jul 11 00:03:55.897241 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 11 00:03:55.897312 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 11 00:03:55.897322 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 00:03:55.897330 kernel: thunder_xcv, ver 1.0
Jul 11 00:03:55.897341 kernel: thunder_bgx, ver 1.0
Jul 11 00:03:55.897348 kernel: nicpf, ver 1.0
Jul 11 00:03:55.897355 kernel: nicvf, ver 1.0
Jul 11 00:03:55.897428 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 11 00:03:55.897491 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-11T00:03:55 UTC (1752192235)
Jul 11 00:03:55.897501 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 11 00:03:55.897508 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 11 00:03:55.897516 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 11 00:03:55.897523 kernel: watchdog: Hard watchdog permanently disabled
Jul 11 00:03:55.897530 kernel: NET: Registered PF_INET6 protocol family
Jul 11 00:03:55.897537 kernel: Segment Routing with IPv6
Jul 11 00:03:55.897548 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 00:03:55.897555 kernel: NET: Registered PF_PACKET protocol family
Jul 11 00:03:55.897562 kernel: Key type dns_resolver registered
Jul 11 00:03:55.897570 kernel: registered taskstats version 1
Jul 11 00:03:55.897577 kernel: Loading compiled-in X.509 certificates
Jul 11 00:03:55.897584 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 9d58afa0c1753353480d5539f26f662c9ce000cb'
Jul 11 00:03:55.897591 kernel: Key type .fscrypt registered
Jul 11 00:03:55.897599 kernel: Key type fscrypt-provisioning registered
Jul 11 00:03:55.897606 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 00:03:55.897615 kernel: ima: Allocated hash algorithm: sha1
Jul 11 00:03:55.897622 kernel: ima: No architecture policies found
Jul 11 00:03:55.897629 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 11 00:03:55.897636 kernel: clk: Disabling unused clocks
Jul 11 00:03:55.897644 kernel: Freeing unused kernel memory: 39424K
Jul 11 00:03:55.897651 kernel: Run /init as init process
Jul 11 00:03:55.897658 kernel: with arguments:
Jul 11 00:03:55.897666 kernel: /init
Jul 11 00:03:55.897673 kernel: with environment:
Jul 11 00:03:55.897681 kernel: HOME=/
Jul 11 00:03:55.897688 kernel: TERM=linux
Jul 11 00:03:55.897695 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 00:03:55.897705 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 11 00:03:55.897714 systemd[1]: Detected virtualization kvm.
Jul 11 00:03:55.897722 systemd[1]: Detected architecture arm64.
Jul 11 00:03:55.897729 systemd[1]: Running in initrd.
Jul 11 00:03:55.897737 systemd[1]: No hostname configured, using default hostname.
Jul 11 00:03:55.897746 systemd[1]: Hostname set to <localhost>.
Jul 11 00:03:55.897754 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 00:03:55.897762 systemd[1]: Queued start job for default target initrd.target.
Jul 11 00:03:55.897770 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 00:03:55.897778 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 00:03:55.897786 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 00:03:55.897794 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 00:03:55.897801 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 00:03:55.897811 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 00:03:55.897820 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 00:03:55.897828 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 00:03:55.897835 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 00:03:55.897843 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:03:55.897862 systemd[1]: Reached target paths.target - Path Units.
Jul 11 00:03:55.897870 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 00:03:55.897893 systemd[1]: Reached target swap.target - Swaps.
Jul 11 00:03:55.897902 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 00:03:55.897909 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 00:03:55.897917 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 00:03:55.897925 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 00:03:55.897933 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 11 00:03:55.897953 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 00:03:55.897961 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 00:03:55.897971 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 00:03:55.897979 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 00:03:55.897986 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 00:03:55.897994 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 00:03:55.898002 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 00:03:55.898010 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 00:03:55.898018 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 00:03:55.898025 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 00:03:55.898033 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:03:55.898042 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 00:03:55.898051 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 00:03:55.898058 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 00:03:55.898067 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 00:03:55.898076 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:03:55.898102 systemd-journald[237]: Collecting audit messages is disabled.
Jul 11 00:03:55.898121 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:03:55.898129 systemd-journald[237]: Journal started
Jul 11 00:03:55.898156 systemd-journald[237]: Runtime Journal (/run/log/journal/6cee5797cd8c42379c79e6a578604b26) is 5.9M, max 47.3M, 41.4M free.
Jul 11 00:03:55.889279 systemd-modules-load[238]: Inserted module 'overlay'
Jul 11 00:03:55.899824 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 00:03:55.900962 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 00:03:55.904551 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 00:03:55.904570 kernel: Bridge firewalling registered
Jul 11 00:03:55.904473 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jul 11 00:03:55.905439 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 00:03:55.907092 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 00:03:55.908221 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 00:03:55.912705 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 00:03:55.919588 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:03:55.920929 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 00:03:55.922569 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 00:03:55.941073 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 00:03:55.942021 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 00:03:55.946530 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 00:03:55.951071 dracut-cmdline[275]: dracut-dracut-053
Jul 11 00:03:55.953678 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=1479f76954ab5eb3c0ce800eb2a80ad04b273ff773a5af5c1fe82fb8feef2990
Jul 11 00:03:55.980831 systemd-resolved[282]: Positive Trust Anchors:
Jul 11 00:03:55.980887 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 00:03:55.980921 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 00:03:55.985764 systemd-resolved[282]: Defaulting to hostname 'linux'.
Jul 11 00:03:55.987401 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 00:03:55.988328 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:03:56.025871 kernel: SCSI subsystem initialized
Jul 11 00:03:56.031869 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 00:03:56.039876 kernel: iscsi: registered transport (tcp)
Jul 11 00:03:56.052873 kernel: iscsi: registered transport (qla4xxx)
Jul 11 00:03:56.052895 kernel: QLogic iSCSI HBA Driver
Jul 11 00:03:56.102666 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 00:03:56.119999 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 00:03:56.136488 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 00:03:56.136553 kernel: device-mapper: uevent: version 1.0.3
Jul 11 00:03:56.136566 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 11 00:03:56.184871 kernel: raid6: neonx8 gen() 14261 MB/s
Jul 11 00:03:56.201872 kernel: raid6: neonx4 gen() 15187 MB/s
Jul 11 00:03:56.218873 kernel: raid6: neonx2 gen() 9587 MB/s
Jul 11 00:03:56.235868 kernel: raid6: neonx1 gen() 5392 MB/s
Jul 11 00:03:56.252887 kernel: raid6: int64x8 gen() 6952 MB/s
Jul 11 00:03:56.269874 kernel: raid6: int64x4 gen() 7344 MB/s
Jul 11 00:03:56.286869 kernel: raid6: int64x2 gen() 6133 MB/s
Jul 11 00:03:56.303865 kernel: raid6: int64x1 gen() 5058 MB/s
Jul 11 00:03:56.303882 kernel: raid6: using algorithm neonx4 gen() 15187 MB/s
Jul 11 00:03:56.320875 kernel: raid6: .... xor() 12441 MB/s, rmw enabled
Jul 11 00:03:56.320895 kernel: raid6: using neon recovery algorithm
Jul 11 00:03:56.325862 kernel: xor: measuring software checksum speed
Jul 11 00:03:56.325878 kernel: 8regs : 19702 MB/sec
Jul 11 00:03:56.327302 kernel: 32regs : 18090 MB/sec
Jul 11 00:03:56.327323 kernel: arm64_neon : 26919 MB/sec
Jul 11 00:03:56.327333 kernel: xor: using function: arm64_neon (26919 MB/sec)
Jul 11 00:03:56.378255 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 00:03:56.390649 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
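
The raid6 lines above are the kernel benchmarking every available gen() implementation and keeping the fastest. The same selection, replayed over the throughputs from this log:

    # The kernel keeps the fastest gen() implementation it measured; the
    # same argmax over the MB/s figures logged above.
    results = {
        "neonx8": 14261, "neonx4": 15187, "neonx2": 9587, "neonx1": 5392,
        "int64x8": 6952, "int64x4": 7344, "int64x2": 6133, "int64x1": 5058,
    }
    best = max(results, key=results.get)
    print(best, results[best])  # neonx4 15187, matching "using algorithm neonx4"
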
Jul 11 00:03:56.402036 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 00:03:56.413738 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Jul 11 00:03:56.416895 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 00:03:56.419036 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 00:03:56.435502 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Jul 11 00:03:56.463578 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 00:03:56.479147 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 00:03:56.518547 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 00:03:56.529155 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 00:03:56.540494 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 00:03:56.541771 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:03:56.542756 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:03:56.546009 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 00:03:56.555033 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 00:03:56.565888 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:03:56.580299 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 11 00:03:56.580510 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 00:03:56.581153 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 00:03:56.581278 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:03:56.588127 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 00:03:56.588155 kernel: GPT:9289727 != 19775487
Jul 11 00:03:56.588166 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 00:03:56.588175 kernel: GPT:9289727 != 19775487
Jul 11 00:03:56.588184 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 00:03:56.588193 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:03:56.588123 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:03:56.589115 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 00:03:56.589262 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:03:56.591066 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:03:56.598183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 00:03:56.607510 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (510)
Jul 11 00:03:56.608486 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 00:03:56.612151 kernel: BTRFS: device fsid f5d5cad7-cb7a-4b07-bec7-847b84711ad7 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (504)
Jul 11 00:03:56.613988 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 00:03:56.619531 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 00:03:56.626761 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
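
The GPT warnings above ("9289727 != 19775487") are the usual sign of a disk image that was grown after it was built: the backup GPT header still sits at the last LBA of the original image rather than at the end of the enlarged disk, which is why Flatcar's disk-uuid.service rewrites the headers a few lines later. The arithmetic, using the block count virtio_blk reported:

    # The backup GPT header must occupy the disk's last LBA. Checking the
    # numbers from the warning against the virtio_blk geometry logged above.
    logical_block = 512
    disk_blocks = 19775488              # "[vda] 19775488 512-byte logical blocks"
    expected_alt_lba = disk_blocks - 1  # 19775487, where the kernel expects it
    found_alt_lba = 9289727             # where the image's backup header really is

    image_bytes = (found_alt_lba + 1) * logical_block
    disk_bytes = disk_blocks * logical_block
    print(expected_alt_lba, found_alt_lba)          # 19775487 9289727
    print(image_bytes / 2**30, disk_bytes / 10**9)  # ~4.43 GiB image on a ~10.1 GB disk
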
Jul 11 00:03:56.630604 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 00:03:56.631856 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 00:03:56.647024 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 00:03:56.648955 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 00:03:56.654638 disk-uuid[549]: Primary Header is updated.
Jul 11 00:03:56.654638 disk-uuid[549]: Secondary Entries is updated.
Jul 11 00:03:56.654638 disk-uuid[549]: Secondary Header is updated.
Jul 11 00:03:56.657257 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:03:56.673693 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 00:03:57.674883 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 00:03:57.675670 disk-uuid[550]: The operation has completed successfully.
Jul 11 00:03:57.698978 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 00:03:57.699100 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 00:03:57.721089 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 00:03:57.723760 sh[569]: Success
Jul 11 00:03:57.738874 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 11 00:03:57.768015 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 00:03:57.779095 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 00:03:57.781180 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 00:03:57.790533 kernel: BTRFS info (device dm-0): first mount of filesystem f5d5cad7-cb7a-4b07-bec7-847b84711ad7
Jul 11 00:03:57.790572 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:03:57.790583 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 11 00:03:57.790594 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 11 00:03:57.790955 kernel: BTRFS info (device dm-0): using free space tree
Jul 11 00:03:57.794667 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 00:03:57.796002 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 00:03:57.808989 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 00:03:57.810248 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 00:03:57.816589 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:03:57.816627 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:03:57.816637 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:03:57.818895 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:03:57.825869 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 11 00:03:57.827989 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:03:57.833507 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 00:03:57.838158 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 00:03:57.908023 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 00:03:57.920040 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 00:03:57.944898 ignition[658]: Ignition 2.19.0
Jul 11 00:03:57.944912 ignition[658]: Stage: fetch-offline
Jul 11 00:03:57.944948 ignition[658]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:03:57.944956 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:03:57.945099 ignition[658]: parsed url from cmdline: ""
Jul 11 00:03:57.945102 ignition[658]: no config URL provided
Jul 11 00:03:57.945106 ignition[658]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 00:03:57.945113 ignition[658]: no config at "/usr/lib/ignition/user.ign"
Jul 11 00:03:57.950015 systemd-networkd[761]: lo: Link UP
Jul 11 00:03:57.945142 ignition[658]: op(1): [started] loading QEMU firmware config module
Jul 11 00:03:57.950019 systemd-networkd[761]: lo: Gained carrier
Jul 11 00:03:57.945147 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 00:03:57.950764 systemd-networkd[761]: Enumeration completed
Jul 11 00:03:57.951475 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 00:03:57.952525 systemd[1]: Reached target network.target - Network.
Jul 11 00:03:57.953635 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:03:57.957472 ignition[658]: op(1): [finished] loading QEMU firmware config module
Jul 11 00:03:57.953638 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 00:03:57.956543 systemd-networkd[761]: eth0: Link UP
Jul 11 00:03:57.956546 systemd-networkd[761]: eth0: Gained carrier
Jul 11 00:03:57.956553 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 00:03:57.975889 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 00:03:58.004869 ignition[658]: parsing config with SHA512: d360723169f3e06d1f2a3d4a157ed013498aa111ba90ff9d33088fb2c6c4e1fdb19c5da4b01f74bd0dec2dfebc638dc9f96eee204a7961998cb73280c154fcbb
Jul 11 00:03:58.010692 unknown[658]: fetched base config from "system"
Jul 11 00:03:58.010703 unknown[658]: fetched user config from "qemu"
Jul 11 00:03:58.011106 ignition[658]: fetch-offline: fetch-offline passed
Jul 11 00:03:58.011177 ignition[658]: Ignition finished successfully
Jul 11 00:03:58.013384 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 00:03:58.014827 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 00:03:58.028022 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 00:03:58.041484 ignition[767]: Ignition 2.19.0
Jul 11 00:03:58.041494 ignition[767]: Stage: kargs
Jul 11 00:03:58.041657 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:03:58.041666 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:03:58.042508 ignition[767]: kargs: kargs passed
Jul 11 00:03:58.042553 ignition[767]: Ignition finished successfully
Jul 11 00:03:58.044447 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
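
Before parsing, Ignition logs the SHA512 digest of the config it fetched, as seen above. A sketch of the same digest computed over a minimal, hypothetical config body; the real config served via the QEMU firmware interface differs, so this hash will not match the one in the log, and the spec version used here is an assumption.

    # Reproduce the kind of digest Ignition reports, over a hypothetical
    # minimal config body (will not match the SHA512 logged above).
    import hashlib
    import json

    config = {"ignition": {"version": "3.4.0"}}  # spec version is an assumption
    body = json.dumps(config).encode()
    print(hashlib.sha512(body).hexdigest())
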
Jul 11 00:03:58.059050 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 00:03:58.069206 ignition[776]: Ignition 2.19.0
Jul 11 00:03:58.069216 ignition[776]: Stage: disks
Jul 11 00:03:58.069380 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Jul 11 00:03:58.069390 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:03:58.071471 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 00:03:58.070241 ignition[776]: disks: disks passed
Jul 11 00:03:58.072547 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 00:03:58.070284 ignition[776]: Ignition finished successfully
Jul 11 00:03:58.073653 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 00:03:58.074917 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 00:03:58.076264 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 00:03:58.077513 systemd[1]: Reached target basic.target - Basic System.
Jul 11 00:03:58.094023 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 00:03:58.106545 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 11 00:03:58.107651 systemd-resolved[282]: Detected conflict on linux IN A 10.0.0.27
Jul 11 00:03:58.107661 systemd-resolved[282]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Jul 11 00:03:58.112505 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 00:03:58.124996 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 00:03:58.170885 kernel: EXT4-fs (vda9): mounted filesystem a2a437d1-0a8e-46b9-88bf-4a47ff29fe90 r/w with ordered data mode. Quota mode: none.
Jul 11 00:03:58.171372 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 00:03:58.172486 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 00:03:58.184951 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:03:58.186518 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 00:03:58.187620 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 00:03:58.187693 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 00:03:58.187719 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:03:58.194209 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (795)
Jul 11 00:03:58.194068 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 00:03:58.195582 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 00:03:58.199522 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:03:58.199545 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:03:58.199555 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:03:58.200865 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:03:58.202099 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:03:58.238756 initrd-setup-root[819]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 00:03:58.243079 initrd-setup-root[826]: cut: /sysroot/etc/group: No such file or directory
Jul 11 00:03:58.247656 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 00:03:58.251354 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 00:03:58.333717 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 00:03:58.344989 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 00:03:58.346560 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 00:03:58.352863 kernel: BTRFS info (device vda6): last unmount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:03:58.370463 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 00:03:58.380570 ignition[908]: INFO : Ignition 2.19.0
Jul 11 00:03:58.380570 ignition[908]: INFO : Stage: mount
Jul 11 00:03:58.381954 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:03:58.381954 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:03:58.381954 ignition[908]: INFO : mount: mount passed
Jul 11 00:03:58.381954 ignition[908]: INFO : Ignition finished successfully
Jul 11 00:03:58.385068 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 00:03:58.396951 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 00:03:58.789198 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 00:03:58.798037 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 00:03:58.804421 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (921)
Jul 11 00:03:58.804464 kernel: BTRFS info (device vda6): first mount of filesystem 183e1727-cabf-4be9-ba6e-b2af88e10184
Jul 11 00:03:58.804475 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 00:03:58.805115 kernel: BTRFS info (device vda6): using free space tree
Jul 11 00:03:58.807857 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 11 00:03:58.808706 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 00:03:58.824835 ignition[938]: INFO : Ignition 2.19.0
Jul 11 00:03:58.824835 ignition[938]: INFO : Stage: files
Jul 11 00:03:58.826047 ignition[938]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 00:03:58.826047 ignition[938]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 00:03:58.826047 ignition[938]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 00:03:58.829336 ignition[938]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 00:03:58.829336 ignition[938]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 00:03:58.832054 ignition[938]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 00:03:58.833418 ignition[938]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 00:03:58.833418 ignition[938]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 00:03:58.832594 unknown[938]: wrote ssh authorized keys file for user: core
Jul 11 00:03:58.836993 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 11 00:03:58.836993 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 11 00:03:58.930827 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 11 00:03:59.129057 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:03:59.130947 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 11 00:03:59.136252 systemd-networkd[761]: eth0: Gained IPv6LL Jul 11 00:03:59.651322 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 11 00:04:00.085402 ignition[938]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 11 00:04:00.087420 ignition[938]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 11 00:04:00.087420 ignition[938]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:04:00.087420 ignition[938]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 11 00:04:00.087420 ignition[938]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 11 00:04:00.087420 ignition[938]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 11 00:04:00.087420 ignition[938]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:04:00.087420 ignition[938]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 11 00:04:00.087420 ignition[938]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 11 00:04:00.087420 ignition[938]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 11 00:04:00.109561 ignition[938]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:04:00.113381 ignition[938]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 11 00:04:00.114669 ignition[938]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 11 00:04:00.114669 ignition[938]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 11 00:04:00.114669 ignition[938]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 11 00:04:00.114669 ignition[938]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:04:00.114669 ignition[938]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 11 00:04:00.114669 ignition[938]: INFO : files: files passed Jul 11 00:04:00.114669 ignition[938]: INFO : Ignition finished successfully Jul 11 00:04:00.117147 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 11 00:04:00.128066 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 11 00:04:00.131064 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jul 11 00:04:00.136867 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 00:04:00.136975 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 00:04:00.140361 initrd-setup-root-after-ignition[967]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 00:04:00.143622 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:04:00.143622 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:04:00.146331 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 00:04:00.147429 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 00:04:00.148642 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 00:04:00.158053 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 00:04:00.185431 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 00:04:00.185544 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 00:04:00.187602 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 00:04:00.189204 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 00:04:00.190598 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 00:04:00.191483 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 00:04:00.209406 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:04:00.222097 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 00:04:00.230047 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 00:04:00.230983 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 00:04:00.232445 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 00:04:00.233804 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 00:04:00.233952 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 00:04:00.235768 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 00:04:00.237251 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 00:04:00.238542 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 00:04:00.239742 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 00:04:00.241122 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 00:04:00.242572 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 00:04:00.243909 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 00:04:00.245366 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 00:04:00.247067 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 00:04:00.248278 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 00:04:00.249560 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 00:04:00.249690 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 00:04:00.251452 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 00:04:00.252869 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:04:00.254254 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 11 00:04:00.254973 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:04:00.256767 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 11 00:04:00.256917 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 11 00:04:00.258948 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 11 00:04:00.259063 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 11 00:04:00.260503 systemd[1]: Stopped target paths.target - Path Units. Jul 11 00:04:00.261606 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 11 00:04:00.265926 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:04:00.266953 systemd[1]: Stopped target slices.target - Slice Units. Jul 11 00:04:00.268570 systemd[1]: Stopped target sockets.target - Socket Units. Jul 11 00:04:00.269701 systemd[1]: iscsid.socket: Deactivated successfully. Jul 11 00:04:00.269800 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 11 00:04:00.270916 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 11 00:04:00.271008 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 11 00:04:00.272356 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 11 00:04:00.272466 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 11 00:04:00.273797 systemd[1]: ignition-files.service: Deactivated successfully. Jul 11 00:04:00.273905 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 11 00:04:00.282017 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 11 00:04:00.282665 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 11 00:04:00.282781 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:04:00.284971 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 11 00:04:00.286211 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 11 00:04:00.286333 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:04:00.287756 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 11 00:04:00.287911 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 11 00:04:00.293736 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 11 00:04:00.294574 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 11 00:04:00.296436 ignition[994]: INFO : Ignition 2.19.0 Jul 11 00:04:00.297511 ignition[994]: INFO : Stage: umount Jul 11 00:04:00.297511 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 11 00:04:00.297511 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 11 00:04:00.299944 ignition[994]: INFO : umount: umount passed Jul 11 00:04:00.299944 ignition[994]: INFO : Ignition finished successfully Jul 11 00:04:00.299743 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 11 00:04:00.301596 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 11 00:04:00.301713 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
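"Stage: umount" is the last of Ignition's stages (fetch, kargs, disks, mount, files, umount), and the two "no configs" lines only mean that no distro-provided base config fragments were found under /usr/lib/ignition/base.d or the qemu platform directory. Such a fragment, if one existed, would be an ordinary Ignition JSON document merged ahead of the user config, for example the empty config (hypothetical):

    { "ignition": { "version": "3.4.0" } }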
Jul 11 00:04:00.302870 systemd[1]: Stopped target network.target - Network. Jul 11 00:04:00.303942 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 11 00:04:00.303997 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 11 00:04:00.305238 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 11 00:04:00.305281 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 11 00:04:00.306484 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 11 00:04:00.306524 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 11 00:04:00.311420 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 11 00:04:00.311472 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 11 00:04:00.312843 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 11 00:04:00.314269 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 11 00:04:00.315622 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 11 00:04:00.315724 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 11 00:04:00.317360 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 11 00:04:00.317401 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 11 00:04:00.323276 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 11 00:04:00.323403 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 11 00:04:00.325529 systemd-networkd[761]: eth0: DHCPv6 lease lost Jul 11 00:04:00.325584 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 11 00:04:00.325642 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:04:00.327363 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 11 00:04:00.327468 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 11 00:04:00.329088 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 11 00:04:00.329156 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:04:00.338961 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 11 00:04:00.340952 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 11 00:04:00.341055 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 11 00:04:00.343265 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 11 00:04:00.343430 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:04:00.344982 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 11 00:04:00.345034 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 11 00:04:00.347032 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:04:00.360525 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 11 00:04:00.360654 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 11 00:04:00.369338 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 11 00:04:00.369470 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:04:00.371668 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 11 00:04:00.371730 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jul 11 00:04:00.373999 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 11 00:04:00.374034 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:04:00.375765 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 11 00:04:00.375818 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 11 00:04:00.378660 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 11 00:04:00.378710 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 11 00:04:00.381361 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 11 00:04:00.381415 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 11 00:04:00.401263 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 11 00:04:00.402349 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 11 00:04:00.402418 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 11 00:04:00.404570 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 11 00:04:00.404782 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:04:00.407019 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 11 00:04:00.407119 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 11 00:04:00.409920 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 11 00:04:00.413182 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 11 00:04:00.423574 systemd[1]: Switching root. Jul 11 00:04:00.458417 systemd-journald[237]: Journal stopped Jul 11 00:04:01.221147 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Jul 11 00:04:01.221208 kernel: SELinux: policy capability network_peer_controls=1 Jul 11 00:04:01.221221 kernel: SELinux: policy capability open_perms=1 Jul 11 00:04:01.221231 kernel: SELinux: policy capability extended_socket_class=1 Jul 11 00:04:01.221241 kernel: SELinux: policy capability always_check_network=0 Jul 11 00:04:01.221251 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 11 00:04:01.221266 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 11 00:04:01.221282 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 11 00:04:01.221291 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 11 00:04:01.221307 kernel: audit: type=1403 audit(1752192240.632:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 11 00:04:01.221319 systemd[1]: Successfully loaded SELinux policy in 30.831ms. Jul 11 00:04:01.221335 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.871ms. Jul 11 00:04:01.221347 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 11 00:04:01.221360 systemd[1]: Detected virtualization kvm. Jul 11 00:04:01.221370 systemd[1]: Detected architecture arm64. Jul 11 00:04:01.221386 systemd[1]: Detected first boot. Jul 11 00:04:01.221400 systemd[1]: Initializing machine ID from VM UUID. Jul 11 00:04:01.221410 zram_generator::config[1038]: No configuration found. Jul 11 00:04:01.221421 systemd[1]: Populated /etc with preset unit settings. 
Jul 11 00:04:01.221431 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 11 00:04:01.221441 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 11 00:04:01.221451 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 11 00:04:01.221461 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 11 00:04:01.221475 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 11 00:04:01.221486 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 11 00:04:01.221496 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 11 00:04:01.221507 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 11 00:04:01.221517 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 11 00:04:01.221527 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 11 00:04:01.221537 systemd[1]: Created slice user.slice - User and Session Slice. Jul 11 00:04:01.221547 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 11 00:04:01.221560 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 11 00:04:01.221574 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 11 00:04:01.221585 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 11 00:04:01.221597 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 11 00:04:01.221608 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 11 00:04:01.221619 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 11 00:04:01.221629 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 11 00:04:01.221639 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 11 00:04:01.221649 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 11 00:04:01.221659 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 11 00:04:01.221671 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 11 00:04:01.221682 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 11 00:04:01.221692 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 11 00:04:01.221702 systemd[1]: Reached target slices.target - Slice Units. Jul 11 00:04:01.221712 systemd[1]: Reached target swap.target - Swaps. Jul 11 00:04:01.221723 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 11 00:04:01.221734 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 11 00:04:01.221751 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 11 00:04:01.221765 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 11 00:04:01.221775 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 11 00:04:01.221786 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 11 00:04:01.221796 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jul 11 00:04:01.221806 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 11 00:04:01.221820 systemd[1]: Mounting media.mount - External Media Directory... Jul 11 00:04:01.221836 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 11 00:04:01.221950 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 11 00:04:01.221967 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 11 00:04:01.221982 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 11 00:04:01.221995 systemd[1]: Reached target machines.target - Containers. Jul 11 00:04:01.222007 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 11 00:04:01.222018 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:04:01.222029 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 11 00:04:01.222039 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 11 00:04:01.222050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:04:01.222060 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:04:01.222076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:04:01.222088 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 11 00:04:01.222099 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:04:01.222109 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 11 00:04:01.222119 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 11 00:04:01.222138 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 11 00:04:01.222152 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 11 00:04:01.222165 systemd[1]: Stopped systemd-fsck-usr.service. Jul 11 00:04:01.222177 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 11 00:04:01.222189 kernel: ACPI: bus type drm_connector registered Jul 11 00:04:01.222201 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 11 00:04:01.222211 kernel: fuse: init (API version 7.39) Jul 11 00:04:01.222221 kernel: loop: module loaded Jul 11 00:04:01.222233 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 11 00:04:01.222245 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 11 00:04:01.222255 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 11 00:04:01.222266 systemd[1]: verity-setup.service: Deactivated successfully. Jul 11 00:04:01.222276 systemd[1]: Stopped verity-setup.service. Jul 11 00:04:01.222310 systemd-journald[1098]: Collecting audit messages is disabled. Jul 11 00:04:01.222336 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 11 00:04:01.222350 systemd-journald[1098]: Journal started Jul 11 00:04:01.222371 systemd-journald[1098]: Runtime Journal (/run/log/journal/6cee5797cd8c42379c79e6a578604b26) is 5.9M, max 47.3M, 41.4M free. 
Jul 11 00:04:01.027430 systemd[1]: Queued start job for default target multi-user.target. Jul 11 00:04:01.047965 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 11 00:04:01.048382 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 11 00:04:01.223935 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 11 00:04:01.225516 systemd[1]: Started systemd-journald.service - Journal Service. Jul 11 00:04:01.226166 systemd[1]: Mounted media.mount - External Media Directory. Jul 11 00:04:01.226981 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 11 00:04:01.227896 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 11 00:04:01.228831 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 11 00:04:01.231861 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 11 00:04:01.233019 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 11 00:04:01.233174 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 11 00:04:01.234360 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:04:01.234520 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:04:01.235597 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:04:01.235736 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:04:01.236885 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:04:01.237012 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:04:01.239224 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 11 00:04:01.239360 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 11 00:04:01.240368 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:04:01.240483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:04:01.241715 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 11 00:04:01.243188 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 11 00:04:01.244650 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 11 00:04:01.255992 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 11 00:04:01.267986 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 11 00:04:01.269936 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 11 00:04:01.270759 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 11 00:04:01.270795 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 11 00:04:01.272603 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 11 00:04:01.274739 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 11 00:04:01.276759 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 11 00:04:01.277669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:04:01.282057 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Jul 11 00:04:01.284355 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 11 00:04:01.285406 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:04:01.289050 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 11 00:04:01.289931 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:04:01.290961 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 11 00:04:01.294795 systemd-journald[1098]: Time spent on flushing to /var/log/journal/6cee5797cd8c42379c79e6a578604b26 is 15.593ms for 852 entries. Jul 11 00:04:01.294795 systemd-journald[1098]: System Journal (/var/log/journal/6cee5797cd8c42379c79e6a578604b26) is 8.0M, max 195.6M, 187.6M free. Jul 11 00:04:01.315918 systemd-journald[1098]: Received client request to flush runtime journal. Jul 11 00:04:01.295033 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 11 00:04:01.298716 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 11 00:04:01.301297 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 11 00:04:01.302505 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 11 00:04:01.304934 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 11 00:04:01.306258 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 11 00:04:01.307603 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 11 00:04:01.318224 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 11 00:04:01.321322 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 11 00:04:01.326579 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 11 00:04:01.330100 kernel: loop0: detected capacity change from 0 to 114432 Jul 11 00:04:01.333569 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 11 00:04:01.339064 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 11 00:04:01.342089 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 11 00:04:01.348065 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 11 00:04:01.353833 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 11 00:04:01.357884 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 11 00:04:01.362533 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 11 00:04:01.363959 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 11 00:04:01.368022 kernel: loop1: detected capacity change from 0 to 114328 Jul 11 00:04:01.374110 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 11 00:04:01.395231 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jul 11 00:04:01.395252 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jul 11 00:04:01.399736 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
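systemd-journal-flush.service is the handoff visible above: the runtime journal in /run/log/journal (capped at 47.3M) is flushed into the persistent /var/log/journal (capped at 195.6M), 852 entries in about 15.6ms here. Those caps are journald's automatic defaults for this disk and could be pinned explicitly, e.g.:

    # journald.conf drop-in; values illustrative, matching the caps logged above
    [Journal]
    Storage=persistent
    RuntimeMaxUse=47M
    SystemMaxUse=195M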
Jul 11 00:04:01.402894 kernel: loop2: detected capacity change from 0 to 203944 Jul 11 00:04:01.442878 kernel: loop3: detected capacity change from 0 to 114432 Jul 11 00:04:01.448972 kernel: loop4: detected capacity change from 0 to 114328 Jul 11 00:04:01.454094 kernel: loop5: detected capacity change from 0 to 203944 Jul 11 00:04:01.457610 (sd-merge)[1173]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 11 00:04:01.458025 (sd-merge)[1173]: Merged extensions into '/usr'. Jul 11 00:04:01.461617 systemd[1]: Reloading requested from client PID 1148 ('systemd-sysext') (unit systemd-sysext.service)... Jul 11 00:04:01.461635 systemd[1]: Reloading... Jul 11 00:04:01.525880 zram_generator::config[1205]: No configuration found. Jul 11 00:04:01.590252 ldconfig[1143]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 11 00:04:01.618590 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:04:01.654676 systemd[1]: Reloading finished in 192 ms. Jul 11 00:04:01.686880 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 11 00:04:01.688220 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 11 00:04:01.708071 systemd[1]: Starting ensure-sysext.service... Jul 11 00:04:01.709914 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 11 00:04:01.725808 systemd[1]: Reloading requested from client PID 1233 ('systemctl') (unit ensure-sysext.service)... Jul 11 00:04:01.725824 systemd[1]: Reloading... Jul 11 00:04:01.732488 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 11 00:04:01.732758 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 11 00:04:01.733404 systemd-tmpfiles[1234]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 11 00:04:01.733621 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jul 11 00:04:01.733673 systemd-tmpfiles[1234]: ACLs are not supported, ignoring. Jul 11 00:04:01.735809 systemd-tmpfiles[1234]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:04:01.735823 systemd-tmpfiles[1234]: Skipping /boot Jul 11 00:04:01.742697 systemd-tmpfiles[1234]: Detected autofs mount point /boot during canonicalization of boot. Jul 11 00:04:01.742714 systemd-tmpfiles[1234]: Skipping /boot Jul 11 00:04:01.778889 zram_generator::config[1264]: No configuration found. Jul 11 00:04:01.857487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:04:01.893575 systemd[1]: Reloading finished in 167 ms. Jul 11 00:04:01.910075 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 11 00:04:01.919297 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 11 00:04:01.927412 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:04:01.930159 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
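The (sd-merge) lines are systemd-sysext overlaying the three discovered extension images (containerd-flatcar, docker-flatcar, kubernetes) onto /usr, which is why systemd immediately reloads and previously unknown units such as docker.socket start appearing. The merge can be inspected or redone at runtime, for example:

    systemd-sysext status    # show which extensions are merged onto /usr and /opt
    systemd-sysext refresh   # unmerge and re-merge after images change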
Jul 11 00:04:01.932864 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 11 00:04:01.938186 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 11 00:04:01.948237 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 11 00:04:01.953203 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 11 00:04:01.956433 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:04:01.959313 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:04:01.963991 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:04:01.966280 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:04:01.967323 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:04:01.968149 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 11 00:04:01.978274 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 11 00:04:01.983040 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 11 00:04:01.984661 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 11 00:04:01.986669 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:04:01.987925 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:04:01.989444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:04:01.989570 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:04:01.993340 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:04:01.993485 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:04:01.995005 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 11 00:04:02.005049 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:04:02.020401 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:04:02.021782 systemd-udevd[1303]: Using default interface naming scheme 'v255'. Jul 11 00:04:02.022287 augenrules[1330]: No rules Jul 11 00:04:02.025244 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:04:02.031194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:04:02.034031 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:04:02.034273 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:04:02.035414 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 11 00:04:02.037068 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:04:02.038549 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:04:02.038685 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:04:02.040325 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
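"augenrules: No rules" simply reports an empty kernel audit ruleset: augenrules concatenates whatever rule files exist under /etc/audit/rules.d/, and this image ships none. A one-line example of such a file (hypothetical):

    # /etc/audit/rules.d/sshd.rules: log writes and attribute changes to sshd_config
    -w /etc/ssh/sshd_config -p wa -k sshd_config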
Jul 11 00:04:02.042009 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:04:02.043662 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:04:02.043788 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:04:02.045109 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 11 00:04:02.051897 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 11 00:04:02.061937 systemd[1]: Finished ensure-sysext.service. Jul 11 00:04:02.066472 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 11 00:04:02.072082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 11 00:04:02.075140 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 11 00:04:02.077666 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 11 00:04:02.081026 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 11 00:04:02.082169 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 11 00:04:02.087065 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 11 00:04:02.088070 systemd-resolved[1302]: Positive Trust Anchors: Jul 11 00:04:02.088357 systemd-resolved[1302]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 11 00:04:02.088438 systemd-resolved[1302]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 11 00:04:02.092049 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 11 00:04:02.095938 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 11 00:04:02.100832 systemd-resolved[1302]: Defaulting to hostname 'linux'. Jul 11 00:04:02.108015 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 11 00:04:02.109465 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 11 00:04:02.111212 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 11 00:04:02.111893 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1353) Jul 11 00:04:02.124090 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 11 00:04:02.125953 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 11 00:04:02.136605 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 11 00:04:02.141818 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 11 00:04:02.146205 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
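The "Positive Trust Anchors" entry is systemd-resolved loading its compiled-in DNSSEC trust anchor, the DS record of the root zone KSK-2017, while the negative anchors list the private and reverse zones it will never attempt to validate. Whether validation is actually enforced is a resolved.conf setting, e.g.:

    # resolved.conf drop-in (illustrative)
    [Resolve]
    DNSSEC=allow-downgrade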
Jul 11 00:04:02.147625 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 11 00:04:02.154280 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 11 00:04:02.154483 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 11 00:04:02.158870 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 11 00:04:02.164334 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 11 00:04:02.164512 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 11 00:04:02.174042 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 11 00:04:02.199996 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 11 00:04:02.219473 systemd-networkd[1371]: lo: Link UP Jul 11 00:04:02.219486 systemd-networkd[1371]: lo: Gained carrier Jul 11 00:04:02.220818 systemd-networkd[1371]: Enumeration completed Jul 11 00:04:02.223241 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 11 00:04:02.224184 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 00:04:02.226491 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:04:02.226503 systemd-networkd[1371]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 00:04:02.227029 systemd[1]: Reached target network.target - Network. Jul 11 00:04:02.228309 systemd-networkd[1371]: eth0: Link UP Jul 11 00:04:02.228314 systemd-networkd[1371]: eth0: Gained carrier Jul 11 00:04:02.228330 systemd-networkd[1371]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 00:04:02.230150 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 11 00:04:02.231272 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 11 00:04:02.232908 systemd[1]: Reached target time-set.target - System Time Set. Jul 11 00:04:02.237919 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 11 00:04:02.240914 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 11 00:04:02.242929 systemd-networkd[1371]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 00:04:02.243722 systemd-timesyncd[1372]: Network configuration changed, trying to establish connection. Jul 11 00:04:02.245647 systemd-timesyncd[1372]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 00:04:02.245710 systemd-timesyncd[1372]: Initial clock synchronization to Fri 2025-07-11 00:04:01.915247 UTC. Jul 11 00:04:02.275296 lvm[1394]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:04:02.280226 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 00:04:02.308430 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 11 00:04:02.309641 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 11 00:04:02.310539 systemd[1]: Reached target sysinit.target - System Initialization. 
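eth0 is configured by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which networkd flags as "potentially unpredictable" because it matches the kernel-assigned interface name by wildcard. Functionally it amounts to a sketch like this (approximate, not copied from the image):

    [Match]
    Name=*

    [Network]
    DHCP=yes

With it in effect, eth0 obtains 10.0.0.27/16 over DHCPv4, and timesyncd then syncs against 10.0.0.1:123.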
Jul 11 00:04:02.311483 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 11 00:04:02.312482 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 11 00:04:02.313655 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 11 00:04:02.314689 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 11 00:04:02.315645 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 11 00:04:02.316578 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 00:04:02.316614 systemd[1]: Reached target paths.target - Path Units. Jul 11 00:04:02.317302 systemd[1]: Reached target timers.target - Timer Units. Jul 11 00:04:02.318962 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 11 00:04:02.321145 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 00:04:02.332970 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 11 00:04:02.335403 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 11 00:04:02.337028 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 11 00:04:02.338177 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 00:04:02.339116 systemd[1]: Reached target basic.target - Basic System. Jul 11 00:04:02.340106 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:04:02.340149 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 00:04:02.341159 systemd[1]: Starting containerd.service - containerd container runtime... Jul 11 00:04:02.343158 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 00:04:02.346002 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 11 00:04:02.347081 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 00:04:02.350695 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 00:04:02.353262 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 11 00:04:02.355071 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 00:04:02.355174 jq[1405]: false Jul 11 00:04:02.359091 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 11 00:04:02.361220 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 00:04:02.364944 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
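docker.socket, pulled in from the docker-flatcar sysext, still declares the legacy /var/run/docker.sock path, which is why both reload passes earlier logged the "references a path below legacy directory /var/run/" warning; systemd transparently rewrites it to /run/docker.sock. A drop-in that would silence the warning (hypothetical override, not present on this image):

    # /etc/systemd/system/docker.socket.d/10-runpath.conf
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

The empty ListenStream= first clears the inherited list before the corrected path is added.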
Jul 11 00:04:02.367238 extend-filesystems[1406]: Found loop3 Jul 11 00:04:02.368896 extend-filesystems[1406]: Found loop4 Jul 11 00:04:02.368896 extend-filesystems[1406]: Found loop5 Jul 11 00:04:02.368896 extend-filesystems[1406]: Found vda Jul 11 00:04:02.368896 extend-filesystems[1406]: Found vda1 Jul 11 00:04:02.368896 extend-filesystems[1406]: Found vda2 Jul 11 00:04:02.368896 extend-filesystems[1406]: Found vda3 Jul 11 00:04:02.368896 extend-filesystems[1406]: Found usr Jul 11 00:04:02.368896 extend-filesystems[1406]: Found vda4 Jul 11 00:04:02.368896 extend-filesystems[1406]: Found vda6 Jul 11 00:04:02.368896 extend-filesystems[1406]: Found vda7 Jul 11 00:04:02.368896 extend-filesystems[1406]: Found vda9 Jul 11 00:04:02.368896 extend-filesystems[1406]: Checking size of /dev/vda9 Jul 11 00:04:02.403261 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 00:04:02.403291 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1348) Jul 11 00:04:02.385840 dbus-daemon[1404]: [system] SELinux support is enabled Jul 11 00:04:02.371410 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 00:04:02.408223 extend-filesystems[1406]: Resized partition /dev/vda9 Jul 11 00:04:02.376415 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 11 00:04:02.409441 extend-filesystems[1424]: resize2fs 1.47.1 (20-May-2024) Jul 11 00:04:02.377021 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 00:04:02.382223 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 00:04:02.388069 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 00:04:02.395233 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 11 00:04:02.408772 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 11 00:04:02.415506 jq[1425]: true Jul 11 00:04:02.413680 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 00:04:02.416115 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 00:04:02.416576 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 00:04:02.416748 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 00:04:02.420596 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 00:04:02.420951 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 11 00:04:02.425864 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 00:04:02.433948 (ntainerd)[1432]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 00:04:02.442457 systemd-logind[1416]: Watching system buttons on /dev/input/event0 (Power Button) Jul 11 00:04:02.444399 extend-filesystems[1424]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 00:04:02.444399 extend-filesystems[1424]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 00:04:02.444399 extend-filesystems[1424]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 00:04:02.444330 systemd[1]: extend-filesystems.service: Deactivated successfully. 
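extend-filesystems grows the root filesystem to fill its partition: resize2fs sees that /dev/vda9 is mounted at / and performs an online grow, and the kernel confirms the jump from 553472 to 1864699 4k blocks. Done by hand, the whole operation is one command, since ext4 supports growing while mounted:

    resize2fs /dev/vda9    # grow the mounted ext4 filesystem to fill its partition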
Jul 11 00:04:02.453077 extend-filesystems[1406]: Resized filesystem in /dev/vda9 Jul 11 00:04:02.444570 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 00:04:02.444729 systemd-logind[1416]: New seat seat0. Jul 11 00:04:02.459922 update_engine[1421]: I20250711 00:04:02.459632 1421 main.cc:92] Flatcar Update Engine starting Jul 11 00:04:02.463007 systemd[1]: Started systemd-logind.service - User Login Management. Jul 11 00:04:02.468365 jq[1431]: true Jul 11 00:04:02.468806 tar[1430]: linux-arm64/helm Jul 11 00:04:02.469094 update_engine[1421]: I20250711 00:04:02.469033 1421 update_check_scheduler.cc:74] Next update check in 5m26s Jul 11 00:04:02.476387 systemd[1]: Started update-engine.service - Update Engine. Jul 11 00:04:02.478961 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 00:04:02.479098 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 00:04:02.481013 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 00:04:02.481140 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 00:04:02.485164 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 00:04:02.555827 bash[1460]: Updated "/home/core/.ssh/authorized_keys" Jul 11 00:04:02.564545 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 00:04:02.566348 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 00:04:02.572635 locksmithd[1447]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 00:04:02.670058 containerd[1432]: time="2025-07-11T00:04:02.669892320Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 11 00:04:02.695752 containerd[1432]: time="2025-07-11T00:04:02.695569640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:04:02.697397 containerd[1432]: time="2025-07-11T00:04:02.697357120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:04:02.697537 containerd[1432]: time="2025-07-11T00:04:02.697519320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 11 00:04:02.697596 containerd[1432]: time="2025-07-11T00:04:02.697583680Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 11 00:04:02.697953 containerd[1432]: time="2025-07-11T00:04:02.697931600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 11 00:04:02.698037 containerd[1432]: time="2025-07-11T00:04:02.698023040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 11 00:04:02.698220 containerd[1432]: time="2025-07-11T00:04:02.698198880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:04:02.698333 containerd[1432]: time="2025-07-11T00:04:02.698317800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:04:02.698669 containerd[1432]: time="2025-07-11T00:04:02.698646120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:04:02.698901 containerd[1432]: time="2025-07-11T00:04:02.698791880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 11 00:04:02.698901 containerd[1432]: time="2025-07-11T00:04:02.698816440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:04:02.699978 containerd[1432]: time="2025-07-11T00:04:02.699154560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 11 00:04:02.699978 containerd[1432]: time="2025-07-11T00:04:02.699279120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:04:02.699978 containerd[1432]: time="2025-07-11T00:04:02.699548080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 11 00:04:02.699978 containerd[1432]: time="2025-07-11T00:04:02.699654560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 11 00:04:02.699978 containerd[1432]: time="2025-07-11T00:04:02.699668400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 11 00:04:02.699978 containerd[1432]: time="2025-07-11T00:04:02.699742880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 11 00:04:02.699978 containerd[1432]: time="2025-07-11T00:04:02.699783040Z" level=info msg="metadata content store policy set" policy=shared Jul 11 00:04:02.703689 containerd[1432]: time="2025-07-11T00:04:02.703641960Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 11 00:04:02.703925 containerd[1432]: time="2025-07-11T00:04:02.703906160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 11 00:04:02.704062 containerd[1432]: time="2025-07-11T00:04:02.704048280Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 11 00:04:02.704154 containerd[1432]: time="2025-07-11T00:04:02.704139240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 11 00:04:02.704244 containerd[1432]: time="2025-07-11T00:04:02.704230360Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jul 11 00:04:02.704670 containerd[1432]: time="2025-07-11T00:04:02.704647600Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 11 00:04:02.704993 containerd[1432]: time="2025-07-11T00:04:02.704974640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 11 00:04:02.705432 containerd[1432]: time="2025-07-11T00:04:02.705413480Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 11 00:04:02.705614 containerd[1432]: time="2025-07-11T00:04:02.705597840Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 11 00:04:02.705742 containerd[1432]: time="2025-07-11T00:04:02.705726400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 11 00:04:02.705976 containerd[1432]: time="2025-07-11T00:04:02.705873000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 11 00:04:02.706152 containerd[1432]: time="2025-07-11T00:04:02.706083320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 11 00:04:02.706275 containerd[1432]: time="2025-07-11T00:04:02.706204800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 11 00:04:02.706390 containerd[1432]: time="2025-07-11T00:04:02.706374720Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 11 00:04:02.706506 containerd[1432]: time="2025-07-11T00:04:02.706492040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 11 00:04:02.706622 containerd[1432]: time="2025-07-11T00:04:02.706607400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 11 00:04:02.706767 containerd[1432]: time="2025-07-11T00:04:02.706750640Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 11 00:04:02.706942 containerd[1432]: time="2025-07-11T00:04:02.706879160Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 11 00:04:02.706942 containerd[1432]: time="2025-07-11T00:04:02.706909960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.706942 containerd[1432]: time="2025-07-11T00:04:02.706924800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.707291 containerd[1432]: time="2025-07-11T00:04:02.707148000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.707291 containerd[1432]: time="2025-07-11T00:04:02.707173520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.707291 containerd[1432]: time="2025-07-11T00:04:02.707186800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.707291 containerd[1432]: time="2025-07-11T00:04:02.707217040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 11 00:04:02.707291 containerd[1432]: time="2025-07-11T00:04:02.707231400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.707291 containerd[1432]: time="2025-07-11T00:04:02.707244080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.707291 containerd[1432]: time="2025-07-11T00:04:02.707257480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.707829 containerd[1432]: time="2025-07-11T00:04:02.707283280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.707829 containerd[1432]: time="2025-07-11T00:04:02.707705360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.707829 containerd[1432]: time="2025-07-11T00:04:02.707727200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.707829 containerd[1432]: time="2025-07-11T00:04:02.707758480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.707829 containerd[1432]: time="2025-07-11T00:04:02.707779120Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 11 00:04:02.707829 containerd[1432]: time="2025-07-11T00:04:02.707803040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.708093 containerd[1432]: time="2025-07-11T00:04:02.707815600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.708093 containerd[1432]: time="2025-07-11T00:04:02.708003520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 11 00:04:02.708213 containerd[1432]: time="2025-07-11T00:04:02.708198640Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 11 00:04:02.708459 containerd[1432]: time="2025-07-11T00:04:02.708397120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 11 00:04:02.708459 containerd[1432]: time="2025-07-11T00:04:02.708416560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 11 00:04:02.708459 containerd[1432]: time="2025-07-11T00:04:02.708429440Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 11 00:04:02.708459 containerd[1432]: time="2025-07-11T00:04:02.708439120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 11 00:04:02.708638 containerd[1432]: time="2025-07-11T00:04:02.708562360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 11 00:04:02.708638 containerd[1432]: time="2025-07-11T00:04:02.708580520Z" level=info msg="NRI interface is disabled by configuration." Jul 11 00:04:02.708638 containerd[1432]: time="2025-07-11T00:04:02.708594160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 11 00:04:02.709173 containerd[1432]: time="2025-07-11T00:04:02.709066880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 11 00:04:02.709382 containerd[1432]: time="2025-07-11T00:04:02.709153440Z" level=info msg="Connect containerd service" Jul 11 00:04:02.709382 containerd[1432]: time="2025-07-11T00:04:02.709351920Z" level=info msg="using legacy CRI server" Jul 11 00:04:02.709382 containerd[1432]: time="2025-07-11T00:04:02.709361400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 00:04:02.709718 containerd[1432]: time="2025-07-11T00:04:02.709573440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 11 00:04:02.711919 containerd[1432]: time="2025-07-11T00:04:02.711891880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 11 00:04:02.712224 containerd[1432]: time="2025-07-11T00:04:02.712170000Z" level=info msg="Start subscribing containerd event" Jul 11 00:04:02.712260 containerd[1432]: time="2025-07-11T00:04:02.712226800Z" level=info msg="Start recovering state" Jul 11 00:04:02.712382 containerd[1432]: time="2025-07-11T00:04:02.712289360Z" level=info msg="Start event monitor" Jul 11 00:04:02.712382 containerd[1432]: time="2025-07-11T00:04:02.712307920Z" level=info msg="Start snapshots syncer" Jul 11 00:04:02.712382 containerd[1432]: time="2025-07-11T00:04:02.712317280Z" level=info msg="Start cni network conf syncer for default" Jul 11 00:04:02.712382 containerd[1432]: time="2025-07-11T00:04:02.712325160Z" level=info msg="Start streaming server" Jul 11 00:04:02.712779 containerd[1432]: time="2025-07-11T00:04:02.712762880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 00:04:02.712828 containerd[1432]: time="2025-07-11T00:04:02.712809880Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 11 00:04:02.714684 containerd[1432]: time="2025-07-11T00:04:02.712870840Z" level=info msg="containerd successfully booted in 0.046253s" Jul 11 00:04:02.712965 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 00:04:02.799141 tar[1430]: linux-arm64/LICENSE Jul 11 00:04:02.799240 tar[1430]: linux-arm64/README.md Jul 11 00:04:02.814975 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 00:04:03.385134 sshd_keygen[1427]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 00:04:03.405802 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 00:04:03.421211 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 00:04:03.427875 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 00:04:03.428257 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 00:04:03.432684 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 00:04:03.446170 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 00:04:03.449152 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 00:04:03.451325 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 11 00:04:03.452428 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 00:04:03.487963 systemd-networkd[1371]: eth0: Gained IPv6LL Jul 11 00:04:03.490491 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 00:04:03.492270 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 00:04:03.506133 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 00:04:03.508481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:04:03.510737 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 00:04:03.524924 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 00:04:03.525115 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 11 00:04:03.526426 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 00:04:03.536537 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 00:04:04.066558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:04:04.067837 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 11 00:04:04.069585 systemd[1]: Startup finished in 537ms (kernel) + 4.909s (initrd) + 3.486s (userspace) = 8.933s. Jul 11 00:04:04.070709 (kubelet)[1517]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:04:04.526066 kubelet[1517]: E0711 00:04:04.526020 1517 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:04:04.528708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:04:04.528873 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:04:08.127743 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 00:04:08.128913 systemd[1]: Started sshd@0-10.0.0.27:22-10.0.0.1:51900.service - OpenSSH per-connection server daemon (10.0.0.1:51900). Jul 11 00:04:08.185314 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 51900 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:04:08.187625 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:04:08.198684 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 11 00:04:08.208108 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 00:04:08.209889 systemd-logind[1416]: New session 1 of user core. Jul 11 00:04:08.218891 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 00:04:08.221218 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 00:04:08.228589 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 00:04:08.316841 systemd[1534]: Queued start job for default target default.target. Jul 11 00:04:08.323858 systemd[1534]: Created slice app.slice - User Application Slice. Jul 11 00:04:08.323886 systemd[1534]: Reached target paths.target - Paths. Jul 11 00:04:08.323898 systemd[1534]: Reached target timers.target - Timers. Jul 11 00:04:08.325184 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 00:04:08.335213 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 00:04:08.335291 systemd[1534]: Reached target sockets.target - Sockets. Jul 11 00:04:08.335304 systemd[1534]: Reached target basic.target - Basic System. Jul 11 00:04:08.335346 systemd[1534]: Reached target default.target - Main User Target. Jul 11 00:04:08.335373 systemd[1534]: Startup finished in 101ms. Jul 11 00:04:08.335647 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 00:04:08.337142 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 00:04:08.394375 systemd[1]: Started sshd@1-10.0.0.27:22-10.0.0.1:51916.service - OpenSSH per-connection server daemon (10.0.0.1:51916). Jul 11 00:04:08.427748 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 51916 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:04:08.429030 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:04:08.432927 systemd-logind[1416]: New session 2 of user core. Jul 11 00:04:08.445068 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 11 00:04:08.495620 sshd[1545]: pam_unix(sshd:session): session closed for user core Jul 11 00:04:08.504544 systemd[1]: sshd@1-10.0.0.27:22-10.0.0.1:51916.service: Deactivated successfully. Jul 11 00:04:08.506098 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 00:04:08.507909 systemd-logind[1416]: Session 2 logged out. Waiting for processes to exit. Jul 11 00:04:08.508707 systemd[1]: Started sshd@2-10.0.0.27:22-10.0.0.1:51920.service - OpenSSH per-connection server daemon (10.0.0.1:51920). Jul 11 00:04:08.509886 systemd-logind[1416]: Removed session 2. Jul 11 00:04:08.542310 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 51920 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:04:08.543526 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:04:08.547141 systemd-logind[1416]: New session 3 of user core. Jul 11 00:04:08.564029 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 00:04:08.611344 sshd[1552]: pam_unix(sshd:session): session closed for user core Jul 11 00:04:08.624421 systemd[1]: sshd@2-10.0.0.27:22-10.0.0.1:51920.service: Deactivated successfully. Jul 11 00:04:08.625839 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 00:04:08.627521 systemd-logind[1416]: Session 3 logged out. Waiting for processes to exit. Jul 11 00:04:08.642198 systemd[1]: Started sshd@3-10.0.0.27:22-10.0.0.1:51924.service - OpenSSH per-connection server daemon (10.0.0.1:51924). Jul 11 00:04:08.642947 systemd-logind[1416]: Removed session 3. Jul 11 00:04:08.679392 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 51924 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:04:08.680598 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:04:08.685184 systemd-logind[1416]: New session 4 of user core. Jul 11 00:04:08.697008 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 00:04:08.747993 sshd[1559]: pam_unix(sshd:session): session closed for user core Jul 11 00:04:08.762789 systemd[1]: sshd@3-10.0.0.27:22-10.0.0.1:51924.service: Deactivated successfully. Jul 11 00:04:08.765391 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 00:04:08.768078 systemd-logind[1416]: Session 4 logged out. Waiting for processes to exit. Jul 11 00:04:08.777158 systemd[1]: Started sshd@4-10.0.0.27:22-10.0.0.1:51940.service - OpenSSH per-connection server daemon (10.0.0.1:51940). Jul 11 00:04:08.778092 systemd-logind[1416]: Removed session 4. Jul 11 00:04:08.807526 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 51940 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:04:08.808783 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:04:08.812788 systemd-logind[1416]: New session 5 of user core. Jul 11 00:04:08.820072 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 11 00:04:08.882561 sudo[1569]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 00:04:08.882872 sudo[1569]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:04:08.898647 sudo[1569]: pam_unix(sudo:session): session closed for user root Jul 11 00:04:08.900702 sshd[1566]: pam_unix(sshd:session): session closed for user core Jul 11 00:04:08.909285 systemd[1]: sshd@4-10.0.0.27:22-10.0.0.1:51940.service: Deactivated successfully. 
Jul 11 00:04:08.912234 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 00:04:08.913521 systemd-logind[1416]: Session 5 logged out. Waiting for processes to exit. Jul 11 00:04:08.914770 systemd[1]: Started sshd@5-10.0.0.27:22-10.0.0.1:51950.service - OpenSSH per-connection server daemon (10.0.0.1:51950). Jul 11 00:04:08.915522 systemd-logind[1416]: Removed session 5. Jul 11 00:04:08.954610 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 51950 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:04:08.955661 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:04:08.959675 systemd-logind[1416]: New session 6 of user core. Jul 11 00:04:08.970036 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 11 00:04:09.021300 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 00:04:09.021580 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:04:09.024766 sudo[1578]: pam_unix(sudo:session): session closed for user root Jul 11 00:04:09.029904 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 11 00:04:09.030188 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:04:09.062208 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 11 00:04:09.063598 auditctl[1581]: No rules Jul 11 00:04:09.064470 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 00:04:09.064728 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 11 00:04:09.066607 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 11 00:04:09.089954 augenrules[1599]: No rules Jul 11 00:04:09.091936 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 11 00:04:09.093540 sudo[1577]: pam_unix(sudo:session): session closed for user root Jul 11 00:04:09.095174 sshd[1574]: pam_unix(sshd:session): session closed for user core Jul 11 00:04:09.109327 systemd[1]: sshd@5-10.0.0.27:22-10.0.0.1:51950.service: Deactivated successfully. Jul 11 00:04:09.110972 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 00:04:09.113750 systemd-logind[1416]: Session 6 logged out. Waiting for processes to exit. Jul 11 00:04:09.115016 systemd[1]: Started sshd@6-10.0.0.27:22-10.0.0.1:51964.service - OpenSSH per-connection server daemon (10.0.0.1:51964). Jul 11 00:04:09.117226 systemd-logind[1416]: Removed session 6. Jul 11 00:04:09.151148 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 51964 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:04:09.152492 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:04:09.157379 systemd-logind[1416]: New session 7 of user core. Jul 11 00:04:09.168070 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 11 00:04:09.220163 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 00:04:09.220431 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 00:04:09.529213 (dockerd)[1629]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 00:04:09.529734 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 00:04:09.785220 dockerd[1629]: time="2025-07-11T00:04:09.785089756Z" level=info msg="Starting up" Jul 11 00:04:09.934807 dockerd[1629]: time="2025-07-11T00:04:09.934759218Z" level=info msg="Loading containers: start." Jul 11 00:04:10.042869 kernel: Initializing XFRM netlink socket Jul 11 00:04:10.119651 systemd-networkd[1371]: docker0: Link UP Jul 11 00:04:10.137319 dockerd[1629]: time="2025-07-11T00:04:10.137270485Z" level=info msg="Loading containers: done." Jul 11 00:04:10.151320 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3915584997-merged.mount: Deactivated successfully. Jul 11 00:04:10.153900 dockerd[1629]: time="2025-07-11T00:04:10.153772751Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 00:04:10.154005 dockerd[1629]: time="2025-07-11T00:04:10.153906507Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 11 00:04:10.154030 dockerd[1629]: time="2025-07-11T00:04:10.154022121Z" level=info msg="Daemon has completed initialization" Jul 11 00:04:10.181748 dockerd[1629]: time="2025-07-11T00:04:10.181579823Z" level=info msg="API listen on /run/docker.sock" Jul 11 00:04:10.181854 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 00:04:10.773316 containerd[1432]: time="2025-07-11T00:04:10.773248777Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 11 00:04:11.656457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount845418928.mount: Deactivated successfully. 
Jul 11 00:04:12.767724 containerd[1432]: time="2025-07-11T00:04:12.767672864Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:12.768527 containerd[1432]: time="2025-07-11T00:04:12.768279795Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 11 00:04:12.769323 containerd[1432]: time="2025-07-11T00:04:12.769266038Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:12.772281 containerd[1432]: time="2025-07-11T00:04:12.772243058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:12.773568 containerd[1432]: time="2025-07-11T00:04:12.773529764Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 2.000231247s" Jul 11 00:04:12.773568 containerd[1432]: time="2025-07-11T00:04:12.773568754Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 11 00:04:12.776551 containerd[1432]: time="2025-07-11T00:04:12.776513934Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 11 00:04:14.014333 containerd[1432]: time="2025-07-11T00:04:14.014270552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:14.015008 containerd[1432]: time="2025-07-11T00:04:14.014983827Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 11 00:04:14.015663 containerd[1432]: time="2025-07-11T00:04:14.015637673Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:14.018673 containerd[1432]: time="2025-07-11T00:04:14.018504842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:14.022190 containerd[1432]: time="2025-07-11T00:04:14.022098044Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.245538363s" Jul 11 00:04:14.022190 containerd[1432]: time="2025-07-11T00:04:14.022142061Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\""
Jul 11 00:04:14.022844 containerd[1432]: time="2025-07-11T00:04:14.022565756Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 11 00:04:14.582929 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 00:04:14.599016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:04:14.699970 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:04:14.703643 (kubelet)[1845]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:04:14.741496 kubelet[1845]: E0711 00:04:14.740616 1845 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:04:14.743546 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:04:14.743686 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:04:15.308540 containerd[1432]: time="2025-07-11T00:04:15.308464215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:15.308966 containerd[1432]: time="2025-07-11T00:04:15.308891076Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 11 00:04:15.313710 containerd[1432]: time="2025-07-11T00:04:15.313648143Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:15.317042 containerd[1432]: time="2025-07-11T00:04:15.316986786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:15.317798 containerd[1432]: time="2025-07-11T00:04:15.317707264Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.295104846s" Jul 11 00:04:15.317798 containerd[1432]: time="2025-07-11T00:04:15.317747724Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 11 00:04:15.318275 containerd[1432]: time="2025-07-11T00:04:15.318249159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 11 00:04:16.338147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2567866788.mount: Deactivated successfully.
Jul 11 00:04:16.613000 containerd[1432]: time="2025-07-11T00:04:16.612862430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:16.613950 containerd[1432]: time="2025-07-11T00:04:16.613912295Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 11 00:04:16.614963 containerd[1432]: time="2025-07-11T00:04:16.614928327Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:16.617229 containerd[1432]: time="2025-07-11T00:04:16.617194675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:16.617918 containerd[1432]: time="2025-07-11T00:04:16.617788840Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.299504287s" Jul 11 00:04:16.617918 containerd[1432]: time="2025-07-11T00:04:16.617819296Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 11 00:04:16.618674 containerd[1432]: time="2025-07-11T00:04:16.618637142Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 00:04:17.296980 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount46445290.mount: Deactivated successfully. 
Jul 11 00:04:18.236839 containerd[1432]: time="2025-07-11T00:04:18.236774099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:18.238004 containerd[1432]: time="2025-07-11T00:04:18.237962681Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 11 00:04:18.239555 containerd[1432]: time="2025-07-11T00:04:18.239100307Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:18.242325 containerd[1432]: time="2025-07-11T00:04:18.242281883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:18.243733 containerd[1432]: time="2025-07-11T00:04:18.243688323Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.625017212s" Jul 11 00:04:18.243787 containerd[1432]: time="2025-07-11T00:04:18.243731521Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 11 00:04:18.244230 containerd[1432]: time="2025-07-11T00:04:18.244193692Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 00:04:18.787725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount383965785.mount: Deactivated successfully. 
Jul 11 00:04:18.791291 containerd[1432]: time="2025-07-11T00:04:18.791243696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:18.792297 containerd[1432]: time="2025-07-11T00:04:18.792263661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 11 00:04:18.793243 containerd[1432]: time="2025-07-11T00:04:18.793187883Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:18.795389 containerd[1432]: time="2025-07-11T00:04:18.795337560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:18.796458 containerd[1432]: time="2025-07-11T00:04:18.796421647Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 552.197604ms" Jul 11 00:04:18.796505 containerd[1432]: time="2025-07-11T00:04:18.796457884Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 11 00:04:18.797088 containerd[1432]: time="2025-07-11T00:04:18.797049529Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 11 00:04:19.308553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount401611044.mount: Deactivated successfully. Jul 11 00:04:20.990605 containerd[1432]: time="2025-07-11T00:04:20.990532178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:20.990980 containerd[1432]: time="2025-07-11T00:04:20.990947200Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 11 00:04:20.991941 containerd[1432]: time="2025-07-11T00:04:20.991884544Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:20.995126 containerd[1432]: time="2025-07-11T00:04:20.995080056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:20.997399 containerd[1432]: time="2025-07-11T00:04:20.997368453Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.200285421s" Jul 11 00:04:20.997399 containerd[1432]: time="2025-07-11T00:04:20.997402467Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 11 00:04:24.832915 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jul 11 00:04:24.842071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:04:24.986408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:04:24.991057 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 00:04:25.026384 kubelet[2008]: E0711 00:04:25.026308 2008 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 00:04:25.029093 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 00:04:25.029246 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 00:04:25.417814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:04:25.433118 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:04:25.455009 systemd[1]: Reloading requested from client PID 2023 ('systemctl') (unit session-7.scope)... Jul 11 00:04:25.455026 systemd[1]: Reloading... Jul 11 00:04:25.523011 zram_generator::config[2062]: No configuration found. Jul 11 00:04:25.656023 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:04:25.710558 systemd[1]: Reloading finished in 255 ms. Jul 11 00:04:25.753143 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:04:25.756896 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:04:25.757133 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:04:25.759687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:04:25.903571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:04:25.909405 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:04:25.950458 kubelet[2109]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:04:25.950458 kubelet[2109]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 00:04:25.950458 kubelet[2109]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 11 00:04:25.950458 kubelet[2109]: I0711 00:04:25.950404 2109 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:04:26.519881 kubelet[2109]: I0711 00:04:26.519567 2109 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:04:26.519881 kubelet[2109]: I0711 00:04:26.519601 2109 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:04:26.519881 kubelet[2109]: I0711 00:04:26.519887 2109 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:04:26.555932 kubelet[2109]: E0711 00:04:26.555896 2109 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:04:26.558136 kubelet[2109]: I0711 00:04:26.558109 2109 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:04:26.568669 kubelet[2109]: E0711 00:04:26.568620 2109 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:04:26.568669 kubelet[2109]: I0711 00:04:26.568660 2109 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:04:26.574161 kubelet[2109]: I0711 00:04:26.574125 2109 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 11 00:04:26.576706 kubelet[2109]: I0711 00:04:26.576675 2109 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:04:26.582127 kubelet[2109]: I0711 00:04:26.582076 2109 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:04:26.582303 kubelet[2109]: I0711 00:04:26.582110 2109 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:04:26.582397 kubelet[2109]: I0711 00:04:26.582312 2109 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:04:26.582397 kubelet[2109]: I0711 00:04:26.582322 2109 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:04:26.582709 kubelet[2109]: I0711 00:04:26.582684 2109 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:04:26.586425 kubelet[2109]: I0711 00:04:26.586388 2109 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:04:26.586425 kubelet[2109]: I0711 00:04:26.586426 2109 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:04:26.586484 kubelet[2109]: I0711 00:04:26.586452 2109 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:04:26.586484 kubelet[2109]: I0711 00:04:26.586468 2109 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:04:26.590751 kubelet[2109]: W0711 00:04:26.590618 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Jul 11 00:04:26.590751 kubelet[2109]: E0711 00:04:26.590695 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:04:26.590751 kubelet[2109]: W0711 00:04:26.590689 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Jul 11 00:04:26.590751 kubelet[2109]: E0711 00:04:26.590740 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:04:26.590924 kubelet[2109]: I0711 00:04:26.590865 2109 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:04:26.591815 kubelet[2109]: I0711 00:04:26.591790 2109 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:04:26.592019 kubelet[2109]: W0711 00:04:26.591997 2109 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 00:04:26.592972 kubelet[2109]: I0711 00:04:26.592946 2109 server.go:1274] "Started kubelet" Jul 11 00:04:26.593627 kubelet[2109]: I0711 00:04:26.593108 2109 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:04:26.596277 kubelet[2109]: I0711 00:04:26.593713 2109 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:04:26.596277 kubelet[2109]: I0711 00:04:26.594635 2109 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:04:26.596277 kubelet[2109]: I0711 00:04:26.594886 2109 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:04:26.596277 kubelet[2109]: I0711 00:04:26.595206 2109 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:04:26.596277 kubelet[2109]: I0711 00:04:26.596136 2109 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:04:26.598456 kubelet[2109]: I0711 00:04:26.597104 2109 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:04:26.598456 kubelet[2109]: I0711 00:04:26.597222 2109 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:04:26.598456 kubelet[2109]: I0711 00:04:26.597266 2109 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:04:26.598456 kubelet[2109]: W0711 00:04:26.597708 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Jul 11 00:04:26.598456 kubelet[2109]: E0711 00:04:26.597754 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError"
Jul 11 00:04:26.598456 kubelet[2109]: E0711 00:04:26.597961 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:04:26.598456 kubelet[2109]: E0711 00:04:26.598185 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="200ms" Jul 11 00:04:26.599765 kubelet[2109]: E0711 00:04:26.599718 2109 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 11 00:04:26.604243 kubelet[2109]: I0711 00:04:26.601394 2109 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:04:26.604243 kubelet[2109]: I0711 00:04:26.601410 2109 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:04:26.604243 kubelet[2109]: I0711 00:04:26.601495 2109 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:04:26.604479 kubelet[2109]: E0711 00:04:26.600185 2109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.27:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.27:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185109953c7516a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 00:04:26.592917156 +0000 UTC m=+0.680319446,LastTimestamp:2025-07-11 00:04:26.592917156 +0000 UTC m=+0.680319446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 00:04:26.610585 kubelet[2109]: I0711 00:04:26.610516 2109 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:04:26.612072 kubelet[2109]: I0711 00:04:26.612037 2109 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 11 00:04:26.612072 kubelet[2109]: I0711 00:04:26.612074 2109 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:04:26.612192 kubelet[2109]: I0711 00:04:26.612094 2109 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:04:26.612192 kubelet[2109]: E0711 00:04:26.612141 2109 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:04:26.612550 kubelet[2109]: W0711 00:04:26.612487 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Jul 11 00:04:26.612736 kubelet[2109]: I0711 00:04:26.612712 2109 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:04:26.612736 kubelet[2109]: I0711 00:04:26.612730 2109 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:04:26.612788 kubelet[2109]: I0711 00:04:26.612748 2109 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:04:26.613166 kubelet[2109]: E0711 00:04:26.613048 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:04:26.615051 kubelet[2109]: I0711 00:04:26.615023 2109 policy_none.go:49] "None policy: Start" Jul 11 00:04:26.615588 kubelet[2109]: I0711 00:04:26.615573 2109 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:04:26.615655 kubelet[2109]: I0711 00:04:26.615594 2109 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:04:26.620885 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 11 00:04:26.631652 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 00:04:26.634341 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 11 00:04:26.644707 kubelet[2109]: I0711 00:04:26.644581 2109 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:04:26.644813 kubelet[2109]: I0711 00:04:26.644784 2109 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:04:26.644841 kubelet[2109]: I0711 00:04:26.644798 2109 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:04:26.645028 kubelet[2109]: I0711 00:04:26.645010 2109 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:04:26.646550 kubelet[2109]: E0711 00:04:26.646523 2109 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 00:04:26.720120 systemd[1]: Created slice kubepods-burstable-pod1b1ed83b8429578aa086d05ef80c51fe.slice - libcontainer container kubepods-burstable-pod1b1ed83b8429578aa086d05ef80c51fe.slice. Jul 11 00:04:26.742501 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice.
Jul 11 00:04:26.746066 kubelet[2109]: I0711 00:04:26.746040 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:04:26.746572 kubelet[2109]: E0711 00:04:26.746535 2109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Jul 11 00:04:26.758365 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 11 00:04:26.799244 kubelet[2109]: I0711 00:04:26.799122 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b1ed83b8429578aa086d05ef80c51fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b1ed83b8429578aa086d05ef80c51fe\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:04:26.799244 kubelet[2109]: I0711 00:04:26.799164 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b1ed83b8429578aa086d05ef80c51fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b1ed83b8429578aa086d05ef80c51fe\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:04:26.799244 kubelet[2109]: I0711 00:04:26.799184 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b1ed83b8429578aa086d05ef80c51fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1b1ed83b8429578aa086d05ef80c51fe\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:04:26.799244 kubelet[2109]: I0711 00:04:26.799201 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:04:26.799244 kubelet[2109]: I0711 00:04:26.799215 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:04:26.799537 kubelet[2109]: I0711 00:04:26.799244 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:04:26.799537 kubelet[2109]: I0711 00:04:26.799261 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:04:26.799537 kubelet[2109]: I0711 00:04:26.799277 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 00:04:26.799537 kubelet[2109]: I0711 00:04:26.799324 2109 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:04:26.799769 kubelet[2109]: E0711 00:04:26.799729 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="400ms" Jul 11 00:04:26.948401 kubelet[2109]: I0711 00:04:26.948349 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:04:26.948712 kubelet[2109]: E0711 00:04:26.948677 2109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" Jul 11 00:04:27.040092 kubelet[2109]: E0711 00:04:27.040053 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:27.040897 containerd[1432]: time="2025-07-11T00:04:27.040830012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1b1ed83b8429578aa086d05ef80c51fe,Namespace:kube-system,Attempt:0,}" Jul 11 00:04:27.057202 kubelet[2109]: E0711 00:04:27.057056 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:27.057630 containerd[1432]: time="2025-07-11T00:04:27.057543450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 11 00:04:27.061117 kubelet[2109]: E0711 00:04:27.061079 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:27.061756 containerd[1432]: time="2025-07-11T00:04:27.061474934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 11 00:04:27.200562 kubelet[2109]: E0711 00:04:27.200505 2109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="800ms" Jul 11 00:04:27.350399 kubelet[2109]: I0711 00:04:27.350362 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:04:27.350746 kubelet[2109]: E0711 00:04:27.350712 2109 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost"
Jul 11 00:04:27.489768 kubelet[2109]: W0711 00:04:27.489685 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Jul 11 00:04:27.489768 kubelet[2109]: E0711 00:04:27.489760 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:04:27.581259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2181180563.mount: Deactivated successfully. Jul 11 00:04:27.584740 containerd[1432]: time="2025-07-11T00:04:27.584688349Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:04:27.586834 containerd[1432]: time="2025-07-11T00:04:27.586781317Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:04:27.587445 containerd[1432]: time="2025-07-11T00:04:27.587394289Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:04:27.589286 containerd[1432]: time="2025-07-11T00:04:27.589248018Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 11 00:04:27.589355 containerd[1432]: time="2025-07-11T00:04:27.589335272Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:04:27.590313 containerd[1432]: time="2025-07-11T00:04:27.590281724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 11 00:04:27.591191 containerd[1432]: time="2025-07-11T00:04:27.591164643Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:04:27.594003 containerd[1432]: time="2025-07-11T00:04:27.593945018Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 536.315672ms" Jul 11 00:04:27.595422 containerd[1432]: time="2025-07-11T00:04:27.595372662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 00:04:27.596488 containerd[1432]: time="2025-07-11T00:04:27.596459039Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 555.518013ms"
Jul 11 00:04:27.600625 containerd[1432]: time="2025-07-11T00:04:27.600525297Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 538.980481ms" Jul 11 00:04:27.746108 containerd[1432]: time="2025-07-11T00:04:27.746000813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:04:27.746108 containerd[1432]: time="2025-07-11T00:04:27.746067901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:04:27.746108 containerd[1432]: time="2025-07-11T00:04:27.746084952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:27.746355 containerd[1432]: time="2025-07-11T00:04:27.746178236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:27.747414 containerd[1432]: time="2025-07-11T00:04:27.747330822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:04:27.747414 containerd[1432]: time="2025-07-11T00:04:27.747383653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:04:27.747414 containerd[1432]: time="2025-07-11T00:04:27.747401264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:27.747512 containerd[1432]: time="2025-07-11T00:04:27.747477615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:27.748596 containerd[1432]: time="2025-07-11T00:04:27.748517790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:04:27.748673 containerd[1432]: time="2025-07-11T00:04:27.748642301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:04:27.748701 containerd[1432]: time="2025-07-11T00:04:27.748674607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:27.748937 containerd[1432]: time="2025-07-11T00:04:27.748900828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:27.767034 systemd[1]: Started cri-containerd-615a97c510deed4594e00cf720734d7b052be271ac413132de8da7694446da02.scope - libcontainer container 615a97c510deed4594e00cf720734d7b052be271ac413132de8da7694446da02. Jul 11 00:04:27.768726 systemd[1]: Started cri-containerd-62ba129e0fa00218d6bf7ecca99037d0564b64d48b6e0d98d44883be639953eb.scope - libcontainer container 62ba129e0fa00218d6bf7ecca99037d0564b64d48b6e0d98d44883be639953eb.
Jul 11 00:04:27.769948 systemd[1]: Started cri-containerd-9588cf6adb78bdc2f762a19320738512c5b096c9377f0298d4928dc5f4db6a33.scope - libcontainer container 9588cf6adb78bdc2f762a19320738512c5b096c9377f0298d4928dc5f4db6a33. Jul 11 00:04:27.804577 containerd[1432]: time="2025-07-11T00:04:27.804524860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"62ba129e0fa00218d6bf7ecca99037d0564b64d48b6e0d98d44883be639953eb\"" Jul 11 00:04:27.806142 kubelet[2109]: E0711 00:04:27.806093 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:27.809341 containerd[1432]: time="2025-07-11T00:04:27.809209160Z" level=info msg="CreateContainer within sandbox \"62ba129e0fa00218d6bf7ecca99037d0564b64d48b6e0d98d44883be639953eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 00:04:27.810795 containerd[1432]: time="2025-07-11T00:04:27.810763233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1b1ed83b8429578aa086d05ef80c51fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"9588cf6adb78bdc2f762a19320738512c5b096c9377f0298d4928dc5f4db6a33\"" Jul 11 00:04:27.811428 kubelet[2109]: E0711 00:04:27.811406 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:27.813680 containerd[1432]: time="2025-07-11T00:04:27.813492014Z" level=info msg="CreateContainer within sandbox \"9588cf6adb78bdc2f762a19320738512c5b096c9377f0298d4928dc5f4db6a33\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 00:04:27.815396 containerd[1432]: time="2025-07-11T00:04:27.815360919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"615a97c510deed4594e00cf720734d7b052be271ac413132de8da7694446da02\"" Jul 11 00:04:27.816509 kubelet[2109]: E0711 00:04:27.816484 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:27.818198 containerd[1432]: time="2025-07-11T00:04:27.818157307Z" level=info msg="CreateContainer within sandbox \"615a97c510deed4594e00cf720734d7b052be271ac413132de8da7694446da02\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 00:04:27.826815 containerd[1432]: time="2025-07-11T00:04:27.826725051Z" level=info msg="CreateContainer within sandbox \"62ba129e0fa00218d6bf7ecca99037d0564b64d48b6e0d98d44883be639953eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d25fd630b99922884cfc335072b978abae845690801efc78f7b5422c51a208c0\"" Jul 11 00:04:27.827451 containerd[1432]: time="2025-07-11T00:04:27.827422361Z" level=info msg="StartContainer for \"d25fd630b99922884cfc335072b978abae845690801efc78f7b5422c51a208c0\"" Jul 11 00:04:27.831934 containerd[1432]: time="2025-07-11T00:04:27.831881280Z" level=info msg="CreateContainer within sandbox \"9588cf6adb78bdc2f762a19320738512c5b096c9377f0298d4928dc5f4db6a33\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"64616a2b89069c03582563859d086bd865d2b28d3523eb8eca34ef20c1681f74\"" Jul 11 00:04:27.832474 containerd[1432]: time="2025-07-11T00:04:27.832448289Z" level=info msg="StartContainer for \"64616a2b89069c03582563859d086bd865d2b28d3523eb8eca34ef20c1681f74\"" Jul 11 00:04:27.839398 containerd[1432]: time="2025-07-11T00:04:27.839350428Z" level=info msg="CreateContainer within sandbox \"615a97c510deed4594e00cf720734d7b052be271ac413132de8da7694446da02\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3769ff36995159d13e90771e32b9e4d3791af1af243e917abf9cbf63983cb5e4\"" Jul 11 00:04:27.842188 containerd[1432]: time="2025-07-11T00:04:27.842150330Z" level=info msg="StartContainer for \"3769ff36995159d13e90771e32b9e4d3791af1af243e917abf9cbf63983cb5e4\"" Jul 11 00:04:27.855054 systemd[1]: Started cri-containerd-d25fd630b99922884cfc335072b978abae845690801efc78f7b5422c51a208c0.scope - libcontainer container d25fd630b99922884cfc335072b978abae845690801efc78f7b5422c51a208c0. Jul 11 00:04:27.864037 systemd[1]: Started cri-containerd-64616a2b89069c03582563859d086bd865d2b28d3523eb8eca34ef20c1681f74.scope - libcontainer container 64616a2b89069c03582563859d086bd865d2b28d3523eb8eca34ef20c1681f74. Jul 11 00:04:27.874057 systemd[1]: Started cri-containerd-3769ff36995159d13e90771e32b9e4d3791af1af243e917abf9cbf63983cb5e4.scope - libcontainer container 3769ff36995159d13e90771e32b9e4d3791af1af243e917abf9cbf63983cb5e4. Jul 11 00:04:27.903128 kubelet[2109]: W0711 00:04:27.903066 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Jul 11 00:04:27.903128 kubelet[2109]: E0711 00:04:27.903138 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:04:27.909505 kubelet[2109]: W0711 00:04:27.909462 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Jul 11 00:04:27.909961 kubelet[2109]: E0711 00:04:27.909531 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:04:27.971286 containerd[1432]: time="2025-07-11T00:04:27.971241058Z" level=info msg="StartContainer for \"64616a2b89069c03582563859d086bd865d2b28d3523eb8eca34ef20c1681f74\" returns successfully" Jul 11 00:04:27.971540 containerd[1432]: time="2025-07-11T00:04:27.971271447Z" level=info msg="StartContainer for \"d25fd630b99922884cfc335072b978abae845690801efc78f7b5422c51a208c0\" returns successfully" Jul 11 00:04:27.971614 containerd[1432]: time="2025-07-11T00:04:27.971275839Z" level=info msg="StartContainer for \"3769ff36995159d13e90771e32b9e4d3791af1af243e917abf9cbf63983cb5e4\" returns successfully" Jul 11 00:04:28.001905 kubelet[2109]: E0711 00:04:28.001858 2109 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="1.6s" Jul 11 00:04:28.054289 kubelet[2109]: W0711 00:04:28.054223 2109 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused Jul 11 00:04:28.054606 kubelet[2109]: E0711 00:04:28.054298 2109 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" Jul 11 00:04:28.152230 kubelet[2109]: I0711 00:04:28.152123 2109 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:04:28.621590 kubelet[2109]: E0711 00:04:28.621558 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:28.625261 kubelet[2109]: E0711 00:04:28.625231 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:28.626594 kubelet[2109]: E0711 00:04:28.626575 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:29.612144 kubelet[2109]: E0711 00:04:29.612088 2109 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 00:04:29.627553 kubelet[2109]: I0711 00:04:29.626433 2109 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:04:29.627553 kubelet[2109]: E0711 00:04:29.626475 2109 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 00:04:29.629216 kubelet[2109]: E0711 00:04:29.629186 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:29.635383 kubelet[2109]: E0711 00:04:29.635350 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:04:29.735777 kubelet[2109]: E0711 00:04:29.735729 2109 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:04:30.016312 kubelet[2109]: E0711 00:04:30.016204 2109 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 11 00:04:30.016805 kubelet[2109]: E0711 00:04:30.016755 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:30.591333 kubelet[2109]: I0711 00:04:30.591279 2109 apiserver.go:52] "Watching apiserver" Jul 11 00:04:30.597386 kubelet[2109]: I0711 
00:04:30.597324 2109 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:04:31.400000 systemd[1]: Reloading requested from client PID 2391 ('systemctl') (unit session-7.scope)... Jul 11 00:04:31.400016 systemd[1]: Reloading... Jul 11 00:04:31.477910 zram_generator::config[2430]: No configuration found. Jul 11 00:04:31.560432 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 00:04:31.627767 systemd[1]: Reloading finished in 227 ms. Jul 11 00:04:31.639086 kubelet[2109]: E0711 00:04:31.638988 2109 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:31.657370 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:04:31.674981 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 00:04:31.675221 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:04:31.675281 systemd[1]: kubelet.service: Consumed 1.072s CPU time, 129.9M memory peak, 0B memory swap peak. Jul 11 00:04:31.681085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 00:04:31.779589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 00:04:31.783909 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 00:04:31.825810 kubelet[2472]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:04:31.825810 kubelet[2472]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 11 00:04:31.825810 kubelet[2472]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 00:04:31.826262 kubelet[2472]: I0711 00:04:31.825965 2472 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 00:04:31.831660 kubelet[2472]: I0711 00:04:31.831626 2472 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 11 00:04:31.831660 kubelet[2472]: I0711 00:04:31.831656 2472 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 00:04:31.831900 kubelet[2472]: I0711 00:04:31.831888 2472 server.go:934] "Client rotation is on, will bootstrap in background" Jul 11 00:04:31.833271 kubelet[2472]: I0711 00:04:31.833242 2472 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
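"Client rotation is on, will bootstrap in background" and the certificate load just above refer to /var/lib/kubelet/pki/kubelet-client-current.pem, a single PEM file carrying both the client certificate and its private key. A sketch, assuming that combined layout, of inspecting the pair's expiry: crypto/tls accepts the same path for both arguments when certificate and key share one file.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
)

func main() {
	// kubelet-client-current.pem holds the certificate and the private
	// key in one PEM file, so the same path serves as both arguments.
	const path = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	pair, err := tls.LoadX509KeyPair(path, path)
	if err != nil {
		log.Fatal(err)
	}
	leaf, err := x509.ParseCertificate(pair.Certificate[0]) // first block is the leaf cert (DER)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("subject=%s notAfter=%s\n", leaf.Subject, leaf.NotAfter)
}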
Jul 11 00:04:31.835352 kubelet[2472]: I0711 00:04:31.835313 2472 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 00:04:31.838054 kubelet[2472]: E0711 00:04:31.838023 2472 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 11 00:04:31.838054 kubelet[2472]: I0711 00:04:31.838052 2472 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 11 00:04:31.841074 kubelet[2472]: I0711 00:04:31.841037 2472 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 11 00:04:31.841156 kubelet[2472]: I0711 00:04:31.841143 2472 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 11 00:04:31.841256 kubelet[2472]: I0711 00:04:31.841231 2472 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 00:04:31.841412 kubelet[2472]: I0711 00:04:31.841258 2472 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 00:04:31.841489 kubelet[2472]: I0711 00:04:31.841419 2472 topology_manager.go:138] "Creating topology manager with none policy" Jul 11 00:04:31.841489 kubelet[2472]: I0711 00:04:31.841427 2472 container_manager_linux.go:300] "Creating device plugin manager" Jul 11 00:04:31.841489 kubelet[2472]: I0711 00:04:31.841459 2472 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:04:31.841558 kubelet[2472]: I0711 00:04:31.841548 2472 kubelet.go:408] "Attempting to sync node with API server" Jul 11 00:04:31.841582 kubelet[2472]: I0711 00:04:31.841560 2472 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 00:04:31.841582 kubelet[2472]: I0711 00:04:31.841578 
2472 kubelet.go:314] "Adding apiserver pod source" Jul 11 00:04:31.841626 kubelet[2472]: I0711 00:04:31.841590 2472 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 00:04:31.842115 kubelet[2472]: I0711 00:04:31.842098 2472 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 11 00:04:31.843347 kubelet[2472]: I0711 00:04:31.843239 2472 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 00:04:31.844476 kubelet[2472]: I0711 00:04:31.844454 2472 server.go:1274] "Started kubelet" Jul 11 00:04:31.844767 kubelet[2472]: I0711 00:04:31.844733 2472 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 00:04:31.845324 kubelet[2472]: I0711 00:04:31.845169 2472 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 00:04:31.845677 kubelet[2472]: I0711 00:04:31.845654 2472 server.go:449] "Adding debug handlers to kubelet server" Jul 11 00:04:31.845896 kubelet[2472]: I0711 00:04:31.845881 2472 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 00:04:31.846413 kubelet[2472]: I0711 00:04:31.846379 2472 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 00:04:31.846522 kubelet[2472]: I0711 00:04:31.846435 2472 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 00:04:31.846618 kubelet[2472]: I0711 00:04:31.846596 2472 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 11 00:04:31.846900 kubelet[2472]: I0711 00:04:31.846695 2472 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 11 00:04:31.846900 kubelet[2472]: I0711 00:04:31.846802 2472 reconciler.go:26] "Reconciler: start to sync state" Jul 11 00:04:31.849206 kubelet[2472]: I0711 00:04:31.848727 2472 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 00:04:31.849526 kubelet[2472]: E0711 00:04:31.849500 2472 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 00:04:31.858140 kubelet[2472]: I0711 00:04:31.853861 2472 factory.go:221] Registration of the containerd container factory successfully Jul 11 00:04:31.858140 kubelet[2472]: I0711 00:04:31.853885 2472 factory.go:221] Registration of the systemd container factory successfully Jul 11 00:04:31.879435 kubelet[2472]: I0711 00:04:31.879360 2472 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 00:04:31.881694 kubelet[2472]: I0711 00:04:31.881319 2472 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 11 00:04:31.881694 kubelet[2472]: I0711 00:04:31.881349 2472 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 11 00:04:31.881694 kubelet[2472]: I0711 00:04:31.881367 2472 kubelet.go:2321] "Starting kubelet main sync loop" Jul 11 00:04:31.881694 kubelet[2472]: E0711 00:04:31.881411 2472 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 00:04:31.898895 kubelet[2472]: I0711 00:04:31.898872 2472 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 11 00:04:31.899337 kubelet[2472]: I0711 00:04:31.899057 2472 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 11 00:04:31.899337 kubelet[2472]: I0711 00:04:31.899085 2472 state_mem.go:36] "Initialized new in-memory state store" Jul 11 00:04:31.899337 kubelet[2472]: I0711 00:04:31.899226 2472 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 11 00:04:31.899337 kubelet[2472]: I0711 00:04:31.899237 2472 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 11 00:04:31.899337 kubelet[2472]: I0711 00:04:31.899253 2472 policy_none.go:49] "None policy: Start" Jul 11 00:04:31.899860 kubelet[2472]: I0711 00:04:31.899829 2472 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 11 00:04:31.899899 kubelet[2472]: I0711 00:04:31.899878 2472 state_mem.go:35] "Initializing new in-memory state store" Jul 11 00:04:31.900047 kubelet[2472]: I0711 00:04:31.900028 2472 state_mem.go:75] "Updated machine memory state" Jul 11 00:04:31.904264 kubelet[2472]: I0711 00:04:31.904192 2472 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 00:04:31.904359 kubelet[2472]: I0711 00:04:31.904341 2472 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 00:04:31.904389 kubelet[2472]: I0711 00:04:31.904357 2472 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 00:04:31.904616 kubelet[2472]: I0711 00:04:31.904597 2472 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 00:04:31.991613 kubelet[2472]: E0711 00:04:31.991450 2472 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:04:32.009117 kubelet[2472]: I0711 00:04:32.009082 2472 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 11 00:04:32.014959 kubelet[2472]: I0711 00:04:32.014917 2472 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 11 00:04:32.015062 kubelet[2472]: I0711 00:04:32.015005 2472 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 11 00:04:32.148508 kubelet[2472]: I0711 00:04:32.148452 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:04:32.148508 kubelet[2472]: I0711 00:04:32.148496 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" 
(UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:04:32.148508 kubelet[2472]: I0711 00:04:32.148518 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:04:32.148738 kubelet[2472]: I0711 00:04:32.148535 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:04:32.148738 kubelet[2472]: I0711 00:04:32.148551 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 00:04:32.148738 kubelet[2472]: I0711 00:04:32.148567 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 11 00:04:32.148738 kubelet[2472]: I0711 00:04:32.148582 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b1ed83b8429578aa086d05ef80c51fe-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b1ed83b8429578aa086d05ef80c51fe\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:04:32.148738 kubelet[2472]: I0711 00:04:32.148596 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b1ed83b8429578aa086d05ef80c51fe-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1b1ed83b8429578aa086d05ef80c51fe\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:04:32.148872 kubelet[2472]: I0711 00:04:32.148611 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b1ed83b8429578aa086d05ef80c51fe-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1b1ed83b8429578aa086d05ef80c51fe\") " pod="kube-system/kube-apiserver-localhost" Jul 11 00:04:32.292196 kubelet[2472]: E0711 00:04:32.292092 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:32.292196 kubelet[2472]: E0711 00:04:32.292130 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:32.292310 kubelet[2472]: E0711 00:04:32.292250 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:32.842665 kubelet[2472]: I0711 00:04:32.842409 2472 apiserver.go:52] "Watching apiserver" Jul 11 00:04:32.847052 kubelet[2472]: I0711 00:04:32.847019 2472 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 11 00:04:32.892187 kubelet[2472]: E0711 00:04:32.891966 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:32.892832 kubelet[2472]: E0711 00:04:32.892811 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:32.898538 kubelet[2472]: E0711 00:04:32.898480 2472 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 11 00:04:32.899772 kubelet[2472]: E0711 00:04:32.899749 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:32.918622 kubelet[2472]: I0711 00:04:32.918544 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.9185275819999998 podStartE2EDuration="1.918527582s" podCreationTimestamp="2025-07-11 00:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:04:32.910805262 +0000 UTC m=+1.123610008" watchObservedRunningTime="2025-07-11 00:04:32.918527582 +0000 UTC m=+1.131332328" Jul 11 00:04:32.926879 kubelet[2472]: I0711 00:04:32.926814 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.926802267 podStartE2EDuration="1.926802267s" podCreationTimestamp="2025-07-11 00:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:04:32.926761262 +0000 UTC m=+1.139566008" watchObservedRunningTime="2025-07-11 00:04:32.926802267 +0000 UTC m=+1.139607013" Jul 11 00:04:32.927027 kubelet[2472]: I0711 00:04:32.926916 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.926912172 podStartE2EDuration="1.926912172s" podCreationTimestamp="2025-07-11 00:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:04:32.918672537 +0000 UTC m=+1.131477283" watchObservedRunningTime="2025-07-11 00:04:32.926912172 +0000 UTC m=+1.139716958" Jul 11 00:04:33.893944 kubelet[2472]: E0711 00:04:33.893839 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:35.446309 kubelet[2472]: E0711 00:04:35.446207 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:36.386101 kubelet[2472]: E0711 00:04:36.386043 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:37.746289 kubelet[2472]: I0711 00:04:37.746248 2472 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 11 00:04:37.746694 containerd[1432]: time="2025-07-11T00:04:37.746647095Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 11 00:04:37.746906 kubelet[2472]: I0711 00:04:37.746872 2472 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 11 00:04:38.830404 systemd[1]: Created slice kubepods-besteffort-pode02ee308_04af_4529_9160_8eb413705349.slice - libcontainer container kubepods-besteffort-pode02ee308_04af_4529_9160_8eb413705349.slice. Jul 11 00:04:38.898468 kubelet[2472]: I0711 00:04:38.898426 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e02ee308-04af-4529-9160-8eb413705349-xtables-lock\") pod \"kube-proxy-lffnv\" (UID: \"e02ee308-04af-4529-9160-8eb413705349\") " pod="kube-system/kube-proxy-lffnv" Jul 11 00:04:38.898468 kubelet[2472]: I0711 00:04:38.898467 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zhpf\" (UniqueName: \"kubernetes.io/projected/e02ee308-04af-4529-9160-8eb413705349-kube-api-access-9zhpf\") pod \"kube-proxy-lffnv\" (UID: \"e02ee308-04af-4529-9160-8eb413705349\") " pod="kube-system/kube-proxy-lffnv" Jul 11 00:04:38.898841 kubelet[2472]: I0711 00:04:38.898491 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e02ee308-04af-4529-9160-8eb413705349-kube-proxy\") pod \"kube-proxy-lffnv\" (UID: \"e02ee308-04af-4529-9160-8eb413705349\") " pod="kube-system/kube-proxy-lffnv" Jul 11 00:04:38.898841 kubelet[2472]: I0711 00:04:38.898509 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e02ee308-04af-4529-9160-8eb413705349-lib-modules\") pod \"kube-proxy-lffnv\" (UID: \"e02ee308-04af-4529-9160-8eb413705349\") " pod="kube-system/kube-proxy-lffnv" Jul 11 00:04:38.926724 systemd[1]: Created slice kubepods-besteffort-pode23ecde4_7534_4a73_9d88_8b3f34b4e04a.slice - libcontainer container kubepods-besteffort-pode23ecde4_7534_4a73_9d88_8b3f34b4e04a.slice. 
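The kubepods-besteffort-pod….slice names that systemd reports above are derived mechanically from the pod's QoS class and UID: the kubelet's systemd cgroup driver replaces the UID's dashes with underscores, because "-" is a hierarchy separator in systemd unit names. A small sketch of that mapping, checked against the kube-proxy pod's UID from the log:

package main

import (
	"fmt"
	"strings"
)

// sliceName mirrors how the systemd cgroup driver names a pod slice:
// dashes in the UID become underscores, since systemd treats "-" as
// a hierarchy separator in unit names.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "e02ee308-04af-4529-9160-8eb413705349"))
	// kubepods-besteffort-pode02ee308_04af_4529_9160_8eb413705349.slice
}

The same escaping convention, applied in the other direction, explains the \x2d sequences in the containerd mount unit names above: a literal dash inside a path component is escaped so it is not read as a separator.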
Jul 11 00:04:38.999235 kubelet[2472]: I0711 00:04:38.999188 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e23ecde4-7534-4a73-9d88-8b3f34b4e04a-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-4qxzl\" (UID: \"e23ecde4-7534-4a73-9d88-8b3f34b4e04a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-4qxzl" Jul 11 00:04:38.999235 kubelet[2472]: I0711 00:04:38.999239 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxzzp\" (UniqueName: \"kubernetes.io/projected/e23ecde4-7534-4a73-9d88-8b3f34b4e04a-kube-api-access-lxzzp\") pod \"tigera-operator-5bf8dfcb4-4qxzl\" (UID: \"e23ecde4-7534-4a73-9d88-8b3f34b4e04a\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-4qxzl" Jul 11 00:04:39.141988 kubelet[2472]: E0711 00:04:39.141946 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:39.142518 containerd[1432]: time="2025-07-11T00:04:39.142485190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lffnv,Uid:e02ee308-04af-4529-9160-8eb413705349,Namespace:kube-system,Attempt:0,}" Jul 11 00:04:39.162553 containerd[1432]: time="2025-07-11T00:04:39.162321261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:04:39.162679 containerd[1432]: time="2025-07-11T00:04:39.162620422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:04:39.162679 containerd[1432]: time="2025-07-11T00:04:39.162638544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:39.162868 containerd[1432]: time="2025-07-11T00:04:39.162820249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:39.180950 systemd[1]: Started cri-containerd-df4a9a5ed485b0307fb02584f3984bd976e0bc2f2510cb2aaab05988009d2707.scope - libcontainer container df4a9a5ed485b0307fb02584f3984bd976e0bc2f2510cb2aaab05988009d2707. 
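The recurring dns.go "Nameserver limits exceeded" errors record the kubelet truncating the host's resolver list to the limit it enforces for Linux pods, three nameservers, which is why the applied line in these entries reads "1.1.1.1 1.0.0.1 8.8.8.8". A rough sketch of that truncation over a plain resolv.conf parse; the fourth server below is an invented example to trigger the drop.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // the limit applied for Linux pods

func applyLimit(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // extra entries are omitted, as the log warns
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(applyLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}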
Jul 11 00:04:39.200934 containerd[1432]: time="2025-07-11T00:04:39.200895133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lffnv,Uid:e02ee308-04af-4529-9160-8eb413705349,Namespace:kube-system,Attempt:0,} returns sandbox id \"df4a9a5ed485b0307fb02584f3984bd976e0bc2f2510cb2aaab05988009d2707\"" Jul 11 00:04:39.201839 kubelet[2472]: E0711 00:04:39.201635 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:39.203597 containerd[1432]: time="2025-07-11T00:04:39.203480006Z" level=info msg="CreateContainer within sandbox \"df4a9a5ed485b0307fb02584f3984bd976e0bc2f2510cb2aaab05988009d2707\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 11 00:04:39.215828 containerd[1432]: time="2025-07-11T00:04:39.215789969Z" level=info msg="CreateContainer within sandbox \"df4a9a5ed485b0307fb02584f3984bd976e0bc2f2510cb2aaab05988009d2707\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"737751764964d548b1bce93d0e53671a801fddfc2e5988dd5c5a195da4c07b4f\"" Jul 11 00:04:39.216876 containerd[1432]: time="2025-07-11T00:04:39.216425416Z" level=info msg="StartContainer for \"737751764964d548b1bce93d0e53671a801fddfc2e5988dd5c5a195da4c07b4f\"" Jul 11 00:04:39.231404 containerd[1432]: time="2025-07-11T00:04:39.231352696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-4qxzl,Uid:e23ecde4-7534-4a73-9d88-8b3f34b4e04a,Namespace:tigera-operator,Attempt:0,}" Jul 11 00:04:39.243039 systemd[1]: Started cri-containerd-737751764964d548b1bce93d0e53671a801fddfc2e5988dd5c5a195da4c07b4f.scope - libcontainer container 737751764964d548b1bce93d0e53671a801fddfc2e5988dd5c5a195da4c07b4f. Jul 11 00:04:39.250117 containerd[1432]: time="2025-07-11T00:04:39.250022647Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:04:39.250117 containerd[1432]: time="2025-07-11T00:04:39.250082536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:04:39.250117 containerd[1432]: time="2025-07-11T00:04:39.250097738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:39.250309 containerd[1432]: time="2025-07-11T00:04:39.250168707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:39.271013 systemd[1]: Started cri-containerd-952563484bb2f138e86628ce5350ec7878b83f2d5d8f7ad22b0572d777704224.scope - libcontainer container 952563484bb2f138e86628ce5350ec7878b83f2d5d8f7ad22b0572d777704224. 
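The RunPodSandbox → CreateContainer → StartContainer sequence in these entries is containerd answering the kubelet over the CRI gRPC API on its local socket. A minimal sketch of the first of those calls made directly against that socket; the socket path and the kube-proxy pod metadata are taken from the log, but this is an illustrative client, not the kubelet's own code.

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's default CRI endpoint; the kubelet talks to the same socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Mirrors the "RunPodSandbox for &PodSandboxMetadata{...}" entry above,
	// using the kube-proxy pod's metadata from the log.
	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-lffnv",
				Uid:       "e02ee308-04af-4529-9160-8eb413705349",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // CreateContainer/StartContainer would follow
}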
Jul 11 00:04:39.282779 containerd[1432]: time="2025-07-11T00:04:39.282739839Z" level=info msg="StartContainer for \"737751764964d548b1bce93d0e53671a801fddfc2e5988dd5c5a195da4c07b4f\" returns successfully" Jul 11 00:04:39.303835 containerd[1432]: time="2025-07-11T00:04:39.303789956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-4qxzl,Uid:e23ecde4-7534-4a73-9d88-8b3f34b4e04a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"952563484bb2f138e86628ce5350ec7878b83f2d5d8f7ad22b0572d777704224\"" Jul 11 00:04:39.306958 containerd[1432]: time="2025-07-11T00:04:39.306918423Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 11 00:04:39.903906 kubelet[2472]: E0711 00:04:39.903872 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:39.914873 kubelet[2472]: I0711 00:04:39.912966 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lffnv" podStartSLOduration=1.91295009 podStartE2EDuration="1.91295009s" podCreationTimestamp="2025-07-11 00:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:04:39.912843556 +0000 UTC m=+8.125648302" watchObservedRunningTime="2025-07-11 00:04:39.91295009 +0000 UTC m=+8.125754836" Jul 11 00:04:40.332608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount321121575.mount: Deactivated successfully. Jul 11 00:04:41.105077 containerd[1432]: time="2025-07-11T00:04:41.105028098Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:41.106010 containerd[1432]: time="2025-07-11T00:04:41.105795232Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 11 00:04:41.108056 containerd[1432]: time="2025-07-11T00:04:41.106997899Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:41.108856 containerd[1432]: time="2025-07-11T00:04:41.108812521Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:41.109758 containerd[1432]: time="2025-07-11T00:04:41.109722713Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.802768765s" Jul 11 00:04:41.109758 containerd[1432]: time="2025-07-11T00:04:41.109756717Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 11 00:04:41.111552 containerd[1432]: time="2025-07-11T00:04:41.111522453Z" level=info msg="CreateContainer within sandbox \"952563484bb2f138e86628ce5350ec7878b83f2d5d8f7ad22b0572d777704224\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 11 00:04:41.120162 containerd[1432]: time="2025-07-11T00:04:41.120120706Z" level=info 
msg="CreateContainer within sandbox \"952563484bb2f138e86628ce5350ec7878b83f2d5d8f7ad22b0572d777704224\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bf8c27afe84e3063e9b6ada24229c7bd02af128f1f627698ea58cf54345da08d\"" Jul 11 00:04:41.120784 containerd[1432]: time="2025-07-11T00:04:41.120740702Z" level=info msg="StartContainer for \"bf8c27afe84e3063e9b6ada24229c7bd02af128f1f627698ea58cf54345da08d\"" Jul 11 00:04:41.151052 systemd[1]: Started cri-containerd-bf8c27afe84e3063e9b6ada24229c7bd02af128f1f627698ea58cf54345da08d.scope - libcontainer container bf8c27afe84e3063e9b6ada24229c7bd02af128f1f627698ea58cf54345da08d. Jul 11 00:04:41.175485 containerd[1432]: time="2025-07-11T00:04:41.175422036Z" level=info msg="StartContainer for \"bf8c27afe84e3063e9b6ada24229c7bd02af128f1f627698ea58cf54345da08d\" returns successfully" Jul 11 00:04:41.929562 kubelet[2472]: I0711 00:04:41.929495 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-4qxzl" podStartSLOduration=2.1241498500000002 podStartE2EDuration="3.929478874s" podCreationTimestamp="2025-07-11 00:04:38 +0000 UTC" firstStartedPulling="2025-07-11 00:04:39.305125578 +0000 UTC m=+7.517930324" lastFinishedPulling="2025-07-11 00:04:41.110454642 +0000 UTC m=+9.323259348" observedRunningTime="2025-07-11 00:04:41.929206801 +0000 UTC m=+10.142011547" watchObservedRunningTime="2025-07-11 00:04:41.929478874 +0000 UTC m=+10.142283620" Jul 11 00:04:42.544734 kubelet[2472]: E0711 00:04:42.544373 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:45.453336 kubelet[2472]: E0711 00:04:45.453299 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:46.429298 kubelet[2472]: E0711 00:04:46.429265 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:46.511727 sudo[1611]: pam_unix(sudo:session): session closed for user root Jul 11 00:04:46.518047 sshd[1608]: pam_unix(sshd:session): session closed for user core Jul 11 00:04:46.526099 systemd-logind[1416]: Session 7 logged out. Waiting for processes to exit. Jul 11 00:04:46.527660 systemd[1]: sshd@6-10.0.0.27:22-10.0.0.1:51964.service: Deactivated successfully. Jul 11 00:04:46.529791 systemd[1]: session-7.scope: Deactivated successfully. Jul 11 00:04:46.530189 systemd[1]: session-7.scope: Consumed 6.330s CPU time, 150.6M memory peak, 0B memory swap peak. Jul 11 00:04:46.531294 systemd-logind[1416]: Removed session 7. Jul 11 00:04:48.136697 update_engine[1421]: I20250711 00:04:48.136116 1421 update_attempter.cc:509] Updating boot flags... 
Jul 11 00:04:48.225891 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2879) Jul 11 00:04:48.284876 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2880) Jul 11 00:04:48.350858 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2880) Jul 11 00:04:52.281195 kubelet[2472]: W0711 00:04:52.281153 2472 reflector.go:561] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jul 11 00:04:52.281195 kubelet[2472]: W0711 00:04:52.281170 2472 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jul 11 00:04:52.283418 kubelet[2472]: W0711 00:04:52.283389 2472 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jul 11 00:04:52.287368 kubelet[2472]: E0711 00:04:52.287332 2472 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 11 00:04:52.287536 kubelet[2472]: E0711 00:04:52.287483 2472 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 11 00:04:52.288328 kubelet[2472]: E0711 00:04:52.288289 2472 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 11 00:04:52.302231 systemd[1]: Created slice kubepods-besteffort-podf0f514cb_4235_481c_9d0f_731b37f47a86.slice - libcontainer container kubepods-besteffort-podf0f514cb_4235_481c_9d0f_731b37f47a86.slice. 
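The reflector errors above ("no relationship found between node 'localhost' and this object") come from the node authorizer: system:node:localhost may read a ConfigMap or Secret only once a pod bound to this node actually references it, and the calico-typha pod has only just been created. For comparison, a minimal client-go sketch of the single-object List the kubelet's reflector issues; the kubeconfig path here is an assumption for illustration, since the kubelet uses its own rotating credentials.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The reflector lists a single named object; under the node authorizer
	// this is forbidden until a scheduled pod on the node references it.
	cms, err := cs.CoreV1().ConfigMaps("calico-system").List(context.Background(),
		metav1.ListOptions{FieldSelector: "metadata.name=tigera-ca-bundle"})
	if err != nil {
		log.Fatal(err) // e.g. configmaps "tigera-ca-bundle" is forbidden: ...
	}
	for _, cm := range cms.Items {
		fmt.Println(cm.Name)
	}
}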
Jul 11 00:04:52.388357 kubelet[2472]: I0711 00:04:52.388311 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0f514cb-4235-481c-9d0f-731b37f47a86-tigera-ca-bundle\") pod \"calico-typha-5d488d4687-lln96\" (UID: \"f0f514cb-4235-481c-9d0f-731b37f47a86\") " pod="calico-system/calico-typha-5d488d4687-lln96"
Jul 11 00:04:52.388357 kubelet[2472]: I0711 00:04:52.388361 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8ljm\" (UniqueName: \"kubernetes.io/projected/f0f514cb-4235-481c-9d0f-731b37f47a86-kube-api-access-s8ljm\") pod \"calico-typha-5d488d4687-lln96\" (UID: \"f0f514cb-4235-481c-9d0f-731b37f47a86\") " pod="calico-system/calico-typha-5d488d4687-lln96"
Jul 11 00:04:52.388597 kubelet[2472]: I0711 00:04:52.388384 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f0f514cb-4235-481c-9d0f-731b37f47a86-typha-certs\") pod \"calico-typha-5d488d4687-lln96\" (UID: \"f0f514cb-4235-481c-9d0f-731b37f47a86\") " pod="calico-system/calico-typha-5d488d4687-lln96"
Jul 11 00:04:52.627793 systemd[1]: Created slice kubepods-besteffort-pod67daa17d_8279_4053_87d9_3fe4fd192fc7.slice - libcontainer container kubepods-besteffort-pod67daa17d_8279_4053_87d9_3fe4fd192fc7.slice.
Jul 11 00:04:52.790571 kubelet[2472]: I0711 00:04:52.790536 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/67daa17d-8279-4053-87d9-3fe4fd192fc7-cni-bin-dir\") pod \"calico-node-r2d7q\" (UID: \"67daa17d-8279-4053-87d9-3fe4fd192fc7\") " pod="calico-system/calico-node-r2d7q"
Jul 11 00:04:52.790571 kubelet[2472]: I0711 00:04:52.790575 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/67daa17d-8279-4053-87d9-3fe4fd192fc7-tigera-ca-bundle\") pod \"calico-node-r2d7q\" (UID: \"67daa17d-8279-4053-87d9-3fe4fd192fc7\") " pod="calico-system/calico-node-r2d7q"
Jul 11 00:04:52.790725 kubelet[2472]: I0711 00:04:52.790593 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67daa17d-8279-4053-87d9-3fe4fd192fc7-var-lib-calico\") pod \"calico-node-r2d7q\" (UID: \"67daa17d-8279-4053-87d9-3fe4fd192fc7\") " pod="calico-system/calico-node-r2d7q"
Jul 11 00:04:52.790725 kubelet[2472]: I0711 00:04:52.790611 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/67daa17d-8279-4053-87d9-3fe4fd192fc7-node-certs\") pod \"calico-node-r2d7q\" (UID: \"67daa17d-8279-4053-87d9-3fe4fd192fc7\") " pod="calico-system/calico-node-r2d7q"
Jul 11 00:04:52.790725 kubelet[2472]: I0711 00:04:52.790640 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/67daa17d-8279-4053-87d9-3fe4fd192fc7-var-run-calico\") pod \"calico-node-r2d7q\" (UID: \"67daa17d-8279-4053-87d9-3fe4fd192fc7\") " pod="calico-system/calico-node-r2d7q"
Jul 11 00:04:52.790725 kubelet[2472]: I0711 00:04:52.790654 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67daa17d-8279-4053-87d9-3fe4fd192fc7-xtables-lock\") pod \"calico-node-r2d7q\" (UID: \"67daa17d-8279-4053-87d9-3fe4fd192fc7\") " pod="calico-system/calico-node-r2d7q"
Jul 11 00:04:52.790725 kubelet[2472]: I0711 00:04:52.790679 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8xwm\" (UniqueName: \"kubernetes.io/projected/67daa17d-8279-4053-87d9-3fe4fd192fc7-kube-api-access-v8xwm\") pod \"calico-node-r2d7q\" (UID: \"67daa17d-8279-4053-87d9-3fe4fd192fc7\") " pod="calico-system/calico-node-r2d7q"
Jul 11 00:04:52.790892 kubelet[2472]: I0711 00:04:52.790712 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/67daa17d-8279-4053-87d9-3fe4fd192fc7-cni-net-dir\") pod \"calico-node-r2d7q\" (UID: \"67daa17d-8279-4053-87d9-3fe4fd192fc7\") " pod="calico-system/calico-node-r2d7q"
Jul 11 00:04:52.790892 kubelet[2472]: I0711 00:04:52.790729 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/67daa17d-8279-4053-87d9-3fe4fd192fc7-flexvol-driver-host\") pod \"calico-node-r2d7q\" (UID: \"67daa17d-8279-4053-87d9-3fe4fd192fc7\") " pod="calico-system/calico-node-r2d7q"
Jul 11 00:04:52.790892 kubelet[2472]: I0711 00:04:52.790744 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/67daa17d-8279-4053-87d9-3fe4fd192fc7-cni-log-dir\") pod \"calico-node-r2d7q\" (UID: \"67daa17d-8279-4053-87d9-3fe4fd192fc7\") " pod="calico-system/calico-node-r2d7q"
Jul 11 00:04:52.790892 kubelet[2472]: I0711 00:04:52.790759 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67daa17d-8279-4053-87d9-3fe4fd192fc7-lib-modules\") pod \"calico-node-r2d7q\" (UID: \"67daa17d-8279-4053-87d9-3fe4fd192fc7\") " pod="calico-system/calico-node-r2d7q"
Jul 11 00:04:52.790892 kubelet[2472]: I0711 00:04:52.790777 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/67daa17d-8279-4053-87d9-3fe4fd192fc7-policysync\") pod \"calico-node-r2d7q\" (UID: \"67daa17d-8279-4053-87d9-3fe4fd192fc7\") " pod="calico-system/calico-node-r2d7q"
Jul 11 00:04:52.900073 kubelet[2472]: E0711 00:04:52.899975 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 11 00:04:52.900073 kubelet[2472]: W0711 00:04:52.900000 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 11 00:04:52.910194 kubelet[2472]: E0711 00:04:52.910142 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 11 00:04:52.921927 kubelet[2472]: E0711 00:04:52.921862 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8t882" podUID="69b81cc5-c8b0-45b3-aeb9-88292bebdc48"
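The pod_workers error above is a bootstrap-ordering artifact rather than a fault: csi-node-driver-8t882 cannot be synced while no CNI plugin is initialized, and it is calico-node itself that installs the CNI config once running. The kubelet's NetworkReady condition flips once a network config appears in the CNI conf directory; the sketch below checks the same precondition, assuming the standard path (an assumption, not something read from this log).

    // Quick sketch of the readiness condition being reported: NetworkReady
    // stays false until a CNI network config exists. The conf directory path
    // is the common default and is assumed here.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	entries, err := os.ReadDir("/etc/cni/net.d") // calico-node populates this
    	if err != nil || len(entries) == 0 {
    		fmt.Println("NetworkReady=false: no CNI network config found")
    		return
    	}
    	fmt.Println("CNI config present:", entries[0].Name())
    }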
[the FlexVolume init-failure E/W/E triplet (driver-call.go:262 / driver-call.go:149 / plugins.go:691) recurs, timestamps aside, from Jul 11 00:04:52.993 through Jul 11 00:04:53.420; duplicate records omitted, interleaved non-duplicate records retained below]
Jul 11 00:04:53.104825 kubelet[2472]: I0711 00:04:53.104730 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69b81cc5-c8b0-45b3-aeb9-88292bebdc48-kubelet-dir\") pod \"csi-node-driver-8t882\" (UID: \"69b81cc5-c8b0-45b3-aeb9-88292bebdc48\") " pod="calico-system/csi-node-driver-8t882"
Jul 11 00:04:53.106471 kubelet[2472]: I0711 00:04:53.106443 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/69b81cc5-c8b0-45b3-aeb9-88292bebdc48-socket-dir\") pod \"csi-node-driver-8t882\" (UID: \"69b81cc5-c8b0-45b3-aeb9-88292bebdc48\") " pod="calico-system/csi-node-driver-8t882"
Jul 11 00:04:53.107929 kubelet[2472]: I0711 00:04:53.107898 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/69b81cc5-c8b0-45b3-aeb9-88292bebdc48-varrun\") pod \"csi-node-driver-8t882\" (UID: \"69b81cc5-c8b0-45b3-aeb9-88292bebdc48\") " pod="calico-system/csi-node-driver-8t882"
Jul 11 00:04:53.109416 kubelet[2472]: I0711 00:04:53.109357 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/69b81cc5-c8b0-45b3-aeb9-88292bebdc48-registration-dir\") pod \"csi-node-driver-8t882\" (UID: \"69b81cc5-c8b0-45b3-aeb9-88292bebdc48\") " pod="calico-system/csi-node-driver-8t882"
Jul 11 00:04:53.113215 kubelet[2472]: I0711 00:04:53.113188 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxsmf\" (UniqueName: \"kubernetes.io/projected/69b81cc5-c8b0-45b3-aeb9-88292bebdc48-kube-api-access-pxsmf\") pod \"csi-node-driver-8t882\" (UID: \"69b81cc5-c8b0-45b3-aeb9-88292bebdc48\") " pod="calico-system/csi-node-driver-8t882"
Error: unexpected end of JSON input" Jul 11 00:04:53.420612 kubelet[2472]: E0711 00:04:53.420602 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.420612 kubelet[2472]: W0711 00:04:53.420612 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.420664 kubelet[2472]: E0711 00:04:53.420620 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.420765 kubelet[2472]: E0711 00:04:53.420756 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.420765 kubelet[2472]: W0711 00:04:53.420766 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.420828 kubelet[2472]: E0711 00:04:53.420773 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.420953 kubelet[2472]: E0711 00:04:53.420942 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.420953 kubelet[2472]: W0711 00:04:53.420953 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.421002 kubelet[2472]: E0711 00:04:53.420961 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.490430 kubelet[2472]: E0711 00:04:53.490294 2472 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jul 11 00:04:53.490430 kubelet[2472]: E0711 00:04:53.490354 2472 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 11 00:04:53.490430 kubelet[2472]: E0711 00:04:53.490398 2472 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f0f514cb-4235-481c-9d0f-731b37f47a86-typha-certs podName:f0f514cb-4235-481c-9d0f-731b37f47a86 nodeName:}" failed. No retries permitted until 2025-07-11 00:04:53.990366413 +0000 UTC m=+22.203171119 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/f0f514cb-4235-481c-9d0f-731b37f47a86-typha-certs") pod "calico-typha-5d488d4687-lln96" (UID: "f0f514cb-4235-481c-9d0f-731b37f47a86") : failed to sync secret cache: timed out waiting for the condition Jul 11 00:04:53.490430 kubelet[2472]: E0711 00:04:53.490418 2472 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f0f514cb-4235-481c-9d0f-731b37f47a86-tigera-ca-bundle podName:f0f514cb-4235-481c-9d0f-731b37f47a86 nodeName:}" failed. No retries permitted until 2025-07-11 00:04:53.990409736 +0000 UTC m=+22.203214482 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/f0f514cb-4235-481c-9d0f-731b37f47a86-tigera-ca-bundle") pod "calico-typha-5d488d4687-lln96" (UID: "f0f514cb-4235-481c-9d0f-731b37f47a86") : failed to sync configmap cache: timed out waiting for the condition Jul 11 00:04:53.497148 kubelet[2472]: E0711 00:04:53.497112 2472 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 11 00:04:53.497148 kubelet[2472]: E0711 00:04:53.497144 2472 projected.go:194] Error preparing data for projected volume kube-api-access-s8ljm for pod calico-system/calico-typha-5d488d4687-lln96: failed to sync configmap cache: timed out waiting for the condition Jul 11 00:04:53.497291 kubelet[2472]: E0711 00:04:53.497203 2472 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0f514cb-4235-481c-9d0f-731b37f47a86-kube-api-access-s8ljm podName:f0f514cb-4235-481c-9d0f-731b37f47a86 nodeName:}" failed. No retries permitted until 2025-07-11 00:04:53.997184988 +0000 UTC m=+22.209989734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s8ljm" (UniqueName: "kubernetes.io/projected/f0f514cb-4235-481c-9d0f-731b37f47a86-kube-api-access-s8ljm") pod "calico-typha-5d488d4687-lln96" (UID: "f0f514cb-4235-481c-9d0f-731b37f47a86") : failed to sync configmap cache: timed out waiting for the condition Jul 11 00:04:53.522429 kubelet[2472]: E0711 00:04:53.522393 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.522429 kubelet[2472]: W0711 00:04:53.522419 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.522429 kubelet[2472]: E0711 00:04:53.522436 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.522684 kubelet[2472]: E0711 00:04:53.522662 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.522684 kubelet[2472]: W0711 00:04:53.522676 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.522684 kubelet[2472]: E0711 00:04:53.522684 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.522907 kubelet[2472]: E0711 00:04:53.522886 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.522936 kubelet[2472]: W0711 00:04:53.522907 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.522936 kubelet[2472]: E0711 00:04:53.522923 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:04:53.523148 kubelet[2472]: E0711 00:04:53.523129 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.523148 kubelet[2472]: W0711 00:04:53.523144 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.523198 kubelet[2472]: E0711 00:04:53.523153 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.523396 kubelet[2472]: E0711 00:04:53.523383 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.523396 kubelet[2472]: W0711 00:04:53.523396 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.523467 kubelet[2472]: E0711 00:04:53.523405 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.523624 kubelet[2472]: E0711 00:04:53.523612 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.523624 kubelet[2472]: W0711 00:04:53.523625 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.523680 kubelet[2472]: E0711 00:04:53.523634 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.524164 kubelet[2472]: E0711 00:04:53.524148 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.524194 kubelet[2472]: W0711 00:04:53.524165 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.524194 kubelet[2472]: E0711 00:04:53.524177 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.624959 kubelet[2472]: E0711 00:04:53.624922 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.624959 kubelet[2472]: W0711 00:04:53.624947 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.624959 kubelet[2472]: E0711 00:04:53.624966 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:04:53.625184 kubelet[2472]: E0711 00:04:53.625161 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.625184 kubelet[2472]: W0711 00:04:53.625173 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.625184 kubelet[2472]: E0711 00:04:53.625182 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.625361 kubelet[2472]: E0711 00:04:53.625343 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.625361 kubelet[2472]: W0711 00:04:53.625355 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.625415 kubelet[2472]: E0711 00:04:53.625364 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.625534 kubelet[2472]: E0711 00:04:53.625516 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.625534 kubelet[2472]: W0711 00:04:53.625528 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.625587 kubelet[2472]: E0711 00:04:53.625537 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.625719 kubelet[2472]: E0711 00:04:53.625701 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.625719 kubelet[2472]: W0711 00:04:53.625714 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.625762 kubelet[2472]: E0711 00:04:53.625722 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.670704 kubelet[2472]: E0711 00:04:53.670666 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.670704 kubelet[2472]: W0711 00:04:53.670689 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.670704 kubelet[2472]: E0711 00:04:53.670708 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:04:53.675631 kubelet[2472]: E0711 00:04:53.675543 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.675631 kubelet[2472]: W0711 00:04:53.675563 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.675631 kubelet[2472]: E0711 00:04:53.675590 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.726818 kubelet[2472]: E0711 00:04:53.726789 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.726818 kubelet[2472]: W0711 00:04:53.726814 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.726992 kubelet[2472]: E0711 00:04:53.726834 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.727112 kubelet[2472]: E0711 00:04:53.727097 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.727112 kubelet[2472]: W0711 00:04:53.727112 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.727164 kubelet[2472]: E0711 00:04:53.727123 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.727569 kubelet[2472]: E0711 00:04:53.727554 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.727569 kubelet[2472]: W0711 00:04:53.727569 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.727630 kubelet[2472]: E0711 00:04:53.727588 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.828730 kubelet[2472]: E0711 00:04:53.828638 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.828730 kubelet[2472]: W0711 00:04:53.828660 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.828730 kubelet[2472]: E0711 00:04:53.828680 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:04:53.828942 kubelet[2472]: E0711 00:04:53.828899 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.828942 kubelet[2472]: W0711 00:04:53.828909 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.828942 kubelet[2472]: E0711 00:04:53.828918 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.829299 kubelet[2472]: E0711 00:04:53.829280 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.829299 kubelet[2472]: W0711 00:04:53.829297 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.829299 kubelet[2472]: E0711 00:04:53.829339 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.831865 containerd[1432]: time="2025-07-11T00:04:53.831787465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r2d7q,Uid:67daa17d-8279-4053-87d9-3fe4fd192fc7,Namespace:calico-system,Attempt:0,}" Jul 11 00:04:53.855882 containerd[1432]: time="2025-07-11T00:04:53.855748903Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:04:53.855882 containerd[1432]: time="2025-07-11T00:04:53.855825428Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:04:53.855882 containerd[1432]: time="2025-07-11T00:04:53.855843830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:53.856140 containerd[1432]: time="2025-07-11T00:04:53.855949477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:53.877019 systemd[1]: Started cri-containerd-c3e49f35ac8e5088255d07923ad188d00931c3d1cad85402dc90f91c36e759be.scope - libcontainer container c3e49f35ac8e5088255d07923ad188d00931c3d1cad85402dc90f91c36e759be. 
Jul 11 00:04:53.898650 containerd[1432]: time="2025-07-11T00:04:53.898524476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r2d7q,Uid:67daa17d-8279-4053-87d9-3fe4fd192fc7,Namespace:calico-system,Attempt:0,} returns sandbox id \"c3e49f35ac8e5088255d07923ad188d00931c3d1cad85402dc90f91c36e759be\"" Jul 11 00:04:53.911224 containerd[1432]: time="2025-07-11T00:04:53.911080394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 11 00:04:53.931046 kubelet[2472]: E0711 00:04:53.930873 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.931046 kubelet[2472]: W0711 00:04:53.930896 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.931046 kubelet[2472]: E0711 00:04:53.930958 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.931582 kubelet[2472]: E0711 00:04:53.931442 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.931582 kubelet[2472]: W0711 00:04:53.931456 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.931582 kubelet[2472]: E0711 00:04:53.931469 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:53.931844 kubelet[2472]: E0711 00:04:53.931715 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:53.931844 kubelet[2472]: W0711 00:04:53.931732 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:53.931844 kubelet[2472]: E0711 00:04:53.931742 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.032338 kubelet[2472]: E0711 00:04:54.032312 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.032338 kubelet[2472]: W0711 00:04:54.032332 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.032338 kubelet[2472]: E0711 00:04:54.032351 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:04:54.032670 kubelet[2472]: E0711 00:04:54.032596 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.032670 kubelet[2472]: W0711 00:04:54.032611 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.032670 kubelet[2472]: E0711 00:04:54.032625 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.032843 kubelet[2472]: E0711 00:04:54.032810 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.032843 kubelet[2472]: W0711 00:04:54.032819 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.032843 kubelet[2472]: E0711 00:04:54.032832 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.033046 kubelet[2472]: E0711 00:04:54.033027 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.033046 kubelet[2472]: W0711 00:04:54.033037 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.033111 kubelet[2472]: E0711 00:04:54.033050 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.033248 kubelet[2472]: E0711 00:04:54.033189 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.033248 kubelet[2472]: W0711 00:04:54.033196 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.033248 kubelet[2472]: E0711 00:04:54.033208 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.033352 kubelet[2472]: E0711 00:04:54.033331 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.033352 kubelet[2472]: W0711 00:04:54.033337 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.033352 kubelet[2472]: E0711 00:04:54.033349 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:04:54.033508 kubelet[2472]: E0711 00:04:54.033498 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.033508 kubelet[2472]: W0711 00:04:54.033507 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.033683 kubelet[2472]: E0711 00:04:54.033519 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.033825 kubelet[2472]: E0711 00:04:54.033801 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.033983 kubelet[2472]: W0711 00:04:54.033969 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.034124 kubelet[2472]: E0711 00:04:54.034073 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.034379 kubelet[2472]: E0711 00:04:54.034359 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.034553 kubelet[2472]: W0711 00:04:54.034448 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.034553 kubelet[2472]: E0711 00:04:54.034464 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.034714 kubelet[2472]: E0711 00:04:54.034701 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.034787 kubelet[2472]: W0711 00:04:54.034776 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.034864 kubelet[2472]: E0711 00:04:54.034833 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.035147 kubelet[2472]: E0711 00:04:54.035133 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.035532 kubelet[2472]: W0711 00:04:54.035185 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.035532 kubelet[2472]: E0711 00:04:54.035199 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:04:54.035917 kubelet[2472]: E0711 00:04:54.035900 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.035917 kubelet[2472]: W0711 00:04:54.035917 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.036022 kubelet[2472]: E0711 00:04:54.035931 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.039062 kubelet[2472]: E0711 00:04:54.039007 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.039062 kubelet[2472]: W0711 00:04:54.039024 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.039062 kubelet[2472]: E0711 00:04:54.039038 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.039231 kubelet[2472]: E0711 00:04:54.039204 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.039231 kubelet[2472]: W0711 00:04:54.039213 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.039359 kubelet[2472]: E0711 00:04:54.039261 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.039359 kubelet[2472]: E0711 00:04:54.039340 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.039359 kubelet[2472]: W0711 00:04:54.039348 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.039615 kubelet[2472]: E0711 00:04:54.039426 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.039615 kubelet[2472]: E0711 00:04:54.039468 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.039615 kubelet[2472]: W0711 00:04:54.039475 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.039615 kubelet[2472]: E0711 00:04:54.039545 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 00:04:54.039741 kubelet[2472]: E0711 00:04:54.039637 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.039741 kubelet[2472]: W0711 00:04:54.039646 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.039741 kubelet[2472]: E0711 00:04:54.039655 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.042574 kubelet[2472]: E0711 00:04:54.042415 2472 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 00:04:54.042574 kubelet[2472]: W0711 00:04:54.042433 2472 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 00:04:54.042574 kubelet[2472]: E0711 00:04:54.042448 2472 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 00:04:54.108839 kubelet[2472]: E0711 00:04:54.108741 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:54.109368 containerd[1432]: time="2025-07-11T00:04:54.109296652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d488d4687-lln96,Uid:f0f514cb-4235-481c-9d0f-731b37f47a86,Namespace:calico-system,Attempt:0,}" Jul 11 00:04:54.129797 containerd[1432]: time="2025-07-11T00:04:54.129697551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:04:54.129915 containerd[1432]: time="2025-07-11T00:04:54.129844121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:04:54.129961 containerd[1432]: time="2025-07-11T00:04:54.129910685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:54.130115 containerd[1432]: time="2025-07-11T00:04:54.130047774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:04:54.145036 systemd[1]: Started cri-containerd-951bc0cd02e88c4378acf994e6ec857aa0c0213bd6a17c342ad71429604dddb7.scope - libcontainer container 951bc0cd02e88c4378acf994e6ec857aa0c0213bd6a17c342ad71429604dddb7. 
Jul 11 00:04:54.173141 containerd[1432]: time="2025-07-11T00:04:54.173082836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d488d4687-lln96,Uid:f0f514cb-4235-481c-9d0f-731b37f47a86,Namespace:calico-system,Attempt:0,} returns sandbox id \"951bc0cd02e88c4378acf994e6ec857aa0c0213bd6a17c342ad71429604dddb7\"" Jul 11 00:04:54.174121 kubelet[2472]: E0711 00:04:54.174089 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:54.882754 kubelet[2472]: E0711 00:04:54.882693 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8t882" podUID="69b81cc5-c8b0-45b3-aeb9-88292bebdc48" Jul 11 00:04:54.930251 containerd[1432]: time="2025-07-11T00:04:54.930198959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:54.930744 containerd[1432]: time="2025-07-11T00:04:54.930710552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360" Jul 11 00:04:54.931283 containerd[1432]: time="2025-07-11T00:04:54.931257867Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:54.944263 containerd[1432]: time="2025-07-11T00:04:54.944222533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:54.947989 containerd[1432]: time="2025-07-11T00:04:54.947946890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.036817693s" Jul 11 00:04:54.948239 containerd[1432]: time="2025-07-11T00:04:54.948088899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 11 00:04:54.950839 containerd[1432]: time="2025-07-11T00:04:54.950811233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 00:04:54.953579 containerd[1432]: time="2025-07-11T00:04:54.953550247Z" level=info msg="CreateContainer within sandbox \"c3e49f35ac8e5088255d07923ad188d00931c3d1cad85402dc90f91c36e759be\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 00:04:54.966199 containerd[1432]: time="2025-07-11T00:04:54.966091726Z" level=info msg="CreateContainer within sandbox \"c3e49f35ac8e5088255d07923ad188d00931c3d1cad85402dc90f91c36e759be\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3a03f04716b5609a899765e233a16753398a37a2a23c0ba4cea38c6c67059997\"" Jul 11 00:04:54.967419 containerd[1432]: time="2025-07-11T00:04:54.967370968Z" level=info msg="StartContainer for 
\"3a03f04716b5609a899765e233a16753398a37a2a23c0ba4cea38c6c67059997\"" Jul 11 00:04:55.007053 systemd[1]: Started cri-containerd-3a03f04716b5609a899765e233a16753398a37a2a23c0ba4cea38c6c67059997.scope - libcontainer container 3a03f04716b5609a899765e233a16753398a37a2a23c0ba4cea38c6c67059997. Jul 11 00:04:55.048555 containerd[1432]: time="2025-07-11T00:04:55.048492804Z" level=info msg="StartContainer for \"3a03f04716b5609a899765e233a16753398a37a2a23c0ba4cea38c6c67059997\" returns successfully" Jul 11 00:04:55.075113 systemd[1]: cri-containerd-3a03f04716b5609a899765e233a16753398a37a2a23c0ba4cea38c6c67059997.scope: Deactivated successfully. Jul 11 00:04:55.109024 containerd[1432]: time="2025-07-11T00:04:55.102131112Z" level=info msg="shim disconnected" id=3a03f04716b5609a899765e233a16753398a37a2a23c0ba4cea38c6c67059997 namespace=k8s.io Jul 11 00:04:55.109024 containerd[1432]: time="2025-07-11T00:04:55.109021932Z" level=warning msg="cleaning up after shim disconnected" id=3a03f04716b5609a899765e233a16753398a37a2a23c0ba4cea38c6c67059997 namespace=k8s.io Jul 11 00:04:55.109233 containerd[1432]: time="2025-07-11T00:04:55.109036573Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:04:55.899673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a03f04716b5609a899765e233a16753398a37a2a23c0ba4cea38c6c67059997-rootfs.mount: Deactivated successfully. Jul 11 00:04:56.367748 containerd[1432]: time="2025-07-11T00:04:56.367687258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:56.368219 containerd[1432]: time="2025-07-11T00:04:56.368182087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=31717828" Jul 11 00:04:56.369106 containerd[1432]: time="2025-07-11T00:04:56.369071579Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:56.371835 containerd[1432]: time="2025-07-11T00:04:56.371770816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:56.373013 containerd[1432]: time="2025-07-11T00:04:56.372980927Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.422131292s" Jul 11 00:04:56.373063 containerd[1432]: time="2025-07-11T00:04:56.373015009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 11 00:04:56.374192 containerd[1432]: time="2025-07-11T00:04:56.374165236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 00:04:56.385203 containerd[1432]: time="2025-07-11T00:04:56.384212702Z" level=info msg="CreateContainer within sandbox \"951bc0cd02e88c4378acf994e6ec857aa0c0213bd6a17c342ad71429604dddb7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 11 00:04:56.402924 containerd[1432]: time="2025-07-11T00:04:56.402865709Z" level=info msg="CreateContainer within 
sandbox \"951bc0cd02e88c4378acf994e6ec857aa0c0213bd6a17c342ad71429604dddb7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"62271aebfbb1f8920d62fe751f5d1ad9b0fbde3146a604a3b8e3eb03d45d188f\"" Jul 11 00:04:56.403530 containerd[1432]: time="2025-07-11T00:04:56.403366219Z" level=info msg="StartContainer for \"62271aebfbb1f8920d62fe751f5d1ad9b0fbde3146a604a3b8e3eb03d45d188f\"" Jul 11 00:04:56.429046 systemd[1]: Started cri-containerd-62271aebfbb1f8920d62fe751f5d1ad9b0fbde3146a604a3b8e3eb03d45d188f.scope - libcontainer container 62271aebfbb1f8920d62fe751f5d1ad9b0fbde3146a604a3b8e3eb03d45d188f. Jul 11 00:04:56.461798 containerd[1432]: time="2025-07-11T00:04:56.461755663Z" level=info msg="StartContainer for \"62271aebfbb1f8920d62fe751f5d1ad9b0fbde3146a604a3b8e3eb03d45d188f\" returns successfully" Jul 11 00:04:56.882410 kubelet[2472]: E0711 00:04:56.882354 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8t882" podUID="69b81cc5-c8b0-45b3-aeb9-88292bebdc48" Jul 11 00:04:56.950903 kubelet[2472]: E0711 00:04:56.950263 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:56.980782 kubelet[2472]: I0711 00:04:56.980699 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d488d4687-lln96" podStartSLOduration=2.7817864979999998 podStartE2EDuration="4.980680761s" podCreationTimestamp="2025-07-11 00:04:52 +0000 UTC" firstStartedPulling="2025-07-11 00:04:54.174820547 +0000 UTC m=+22.387625293" lastFinishedPulling="2025-07-11 00:04:56.37371481 +0000 UTC m=+24.586519556" observedRunningTime="2025-07-11 00:04:56.975820798 +0000 UTC m=+25.188625544" watchObservedRunningTime="2025-07-11 00:04:56.980680761 +0000 UTC m=+25.193485467" Jul 11 00:04:57.951410 kubelet[2472]: I0711 00:04:57.951375 2472 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:04:57.951920 kubelet[2472]: E0711 00:04:57.951804 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:58.544320 containerd[1432]: time="2025-07-11T00:04:58.544268897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:58.544799 containerd[1432]: time="2025-07-11T00:04:58.544686759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 11 00:04:58.545477 containerd[1432]: time="2025-07-11T00:04:58.545449800Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:58.547456 containerd[1432]: time="2025-07-11T00:04:58.547424066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:04:58.548269 containerd[1432]: time="2025-07-11T00:04:58.548242829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id 
\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.174040152s" Jul 11 00:04:58.548323 containerd[1432]: time="2025-07-11T00:04:58.548273991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 11 00:04:58.550810 containerd[1432]: time="2025-07-11T00:04:58.550772525Z" level=info msg="CreateContainer within sandbox \"c3e49f35ac8e5088255d07923ad188d00931c3d1cad85402dc90f91c36e759be\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 00:04:58.562462 containerd[1432]: time="2025-07-11T00:04:58.562417148Z" level=info msg="CreateContainer within sandbox \"c3e49f35ac8e5088255d07923ad188d00931c3d1cad85402dc90f91c36e759be\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c88dd79be33bdbaca56fbef41f3087f39e0b56c94668fedd40186603a6d75c35\"" Jul 11 00:04:58.563870 containerd[1432]: time="2025-07-11T00:04:58.562947777Z" level=info msg="StartContainer for \"c88dd79be33bdbaca56fbef41f3087f39e0b56c94668fedd40186603a6d75c35\"" Jul 11 00:04:58.591033 systemd[1]: Started cri-containerd-c88dd79be33bdbaca56fbef41f3087f39e0b56c94668fedd40186603a6d75c35.scope - libcontainer container c88dd79be33bdbaca56fbef41f3087f39e0b56c94668fedd40186603a6d75c35. Jul 11 00:04:58.623633 containerd[1432]: time="2025-07-11T00:04:58.621302462Z" level=info msg="StartContainer for \"c88dd79be33bdbaca56fbef41f3087f39e0b56c94668fedd40186603a6d75c35\" returns successfully" Jul 11 00:04:58.882378 kubelet[2472]: E0711 00:04:58.882318 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8t882" podUID="69b81cc5-c8b0-45b3-aeb9-88292bebdc48" Jul 11 00:04:59.274483 containerd[1432]: time="2025-07-11T00:04:59.274342327Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 00:04:59.276870 systemd[1]: cri-containerd-c88dd79be33bdbaca56fbef41f3087f39e0b56c94668fedd40186603a6d75c35.scope: Deactivated successfully. Jul 11 00:04:59.296270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c88dd79be33bdbaca56fbef41f3087f39e0b56c94668fedd40186603a6d75c35-rootfs.mount: Deactivated successfully. 
Jul 11 00:04:59.299273 containerd[1432]: time="2025-07-11T00:04:59.299218645Z" level=info msg="shim disconnected" id=c88dd79be33bdbaca56fbef41f3087f39e0b56c94668fedd40186603a6d75c35 namespace=k8s.io Jul 11 00:04:59.299611 containerd[1432]: time="2025-07-11T00:04:59.299442737Z" level=warning msg="cleaning up after shim disconnected" id=c88dd79be33bdbaca56fbef41f3087f39e0b56c94668fedd40186603a6d75c35 namespace=k8s.io Jul 11 00:04:59.299611 containerd[1432]: time="2025-07-11T00:04:59.299459058Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:04:59.309309 containerd[1432]: time="2025-07-11T00:04:59.309265842Z" level=warning msg="cleanup warnings time=\"2025-07-11T00:04:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 11 00:04:59.370614 kubelet[2472]: I0711 00:04:59.370577 2472 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 11 00:04:59.413559 systemd[1]: Created slice kubepods-besteffort-podf2557641_a3f6_4d30_9fc0_458168133b25.slice - libcontainer container kubepods-besteffort-podf2557641_a3f6_4d30_9fc0_458168133b25.slice. Jul 11 00:04:59.421722 systemd[1]: Created slice kubepods-besteffort-pode57c9b88_19af_4e41_ab80_5de76c7ad975.slice - libcontainer container kubepods-besteffort-pode57c9b88_19af_4e41_ab80_5de76c7ad975.slice. Jul 11 00:04:59.430454 systemd[1]: Created slice kubepods-burstable-pod78d5efbc_6d02_414b_bb0f_acc8122968c7.slice - libcontainer container kubepods-burstable-pod78d5efbc_6d02_414b_bb0f_acc8122968c7.slice. Jul 11 00:04:59.439355 systemd[1]: Created slice kubepods-burstable-pod2ad5e86d_f0d2_48a9_bf45_69b8bfcd24a6.slice - libcontainer container kubepods-burstable-pod2ad5e86d_f0d2_48a9_bf45_69b8bfcd24a6.slice. Jul 11 00:04:59.443598 systemd[1]: Created slice kubepods-besteffort-podd36dbd41_8dff_4eb6_a8f2_69b019debb74.slice - libcontainer container kubepods-besteffort-podd36dbd41_8dff_4eb6_a8f2_69b019debb74.slice. Jul 11 00:04:59.449503 systemd[1]: Created slice kubepods-besteffort-pod26cbee42_8511_493f_84f7_7ccd7c1cf1b4.slice - libcontainer container kubepods-besteffort-pod26cbee42_8511_493f_84f7_7ccd7c1cf1b4.slice. Jul 11 00:04:59.456370 systemd[1]: Created slice kubepods-besteffort-pod1a8f323e_1163_4dff_a9f9_a0583b72a07e.slice - libcontainer container kubepods-besteffort-pod1a8f323e_1163_4dff_a9f9_a0583b72a07e.slice. Jul 11 00:04:59.461808 systemd[1]: Created slice kubepods-besteffort-poddbf96476_6021_46fe_b842_fd862c7996c8.slice - libcontainer container kubepods-besteffort-poddbf96476_6021_46fe_b842_fd862c7996c8.slice. 
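The recurring dns.go:153 "Nameserver limits exceeded" errors above are the kubelet clamping the resolv.conf it hands to pods: the libc resolver honors at most three nameserver entries, so the kubelet keeps the first three and logs the line it actually applied (here "1.1.1.1 1.0.0.1 8.8.8.8"). A rough sketch of that clamping, assuming a plain resolv.conf parse (illustrative, not kubelet source):

    // Hedged sketch of the clamping behind "Nameserver limits exceeded":
    // keep at most three nameserver entries and report the applied line.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // matches the libc resolver's limit

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
                strings.Join(servers[:maxNameservers], " "))
        }
    }

The message is a warning, not a failure: pod DNS keeps working with the truncated server list, which is why the sandboxes below start normally despite it.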
Jul 11 00:04:59.575504 kubelet[2472]: I0711 00:04:59.574983 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/1a8f323e-1163-4dff-a9f9-a0583b72a07e-goldmane-key-pair\") pod \"goldmane-58fd7646b9-ts85z\" (UID: \"1a8f323e-1163-4dff-a9f9-a0583b72a07e\") " pod="calico-system/goldmane-58fd7646b9-ts85z" Jul 11 00:04:59.575504 kubelet[2472]: I0711 00:04:59.575029 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2557641-a3f6-4d30-9fc0-458168133b25-tigera-ca-bundle\") pod \"calico-kube-controllers-7688b6944f-z5wj7\" (UID: \"f2557641-a3f6-4d30-9fc0-458168133b25\") " pod="calico-system/calico-kube-controllers-7688b6944f-z5wj7" Jul 11 00:04:59.575504 kubelet[2472]: I0711 00:04:59.575052 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/78d5efbc-6d02-414b-bb0f-acc8122968c7-config-volume\") pod \"coredns-7c65d6cfc9-fj8dx\" (UID: \"78d5efbc-6d02-414b-bb0f-acc8122968c7\") " pod="kube-system/coredns-7c65d6cfc9-fj8dx" Jul 11 00:04:59.575504 kubelet[2472]: I0711 00:04:59.575071 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26cbee42-8511-493f-84f7-7ccd7c1cf1b4-whisker-ca-bundle\") pod \"whisker-64fb989db4-7nm8p\" (UID: \"26cbee42-8511-493f-84f7-7ccd7c1cf1b4\") " pod="calico-system/whisker-64fb989db4-7nm8p" Jul 11 00:04:59.575504 kubelet[2472]: I0711 00:04:59.575118 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbf96476-6021-46fe-b842-fd862c7996c8-calico-apiserver-certs\") pod \"calico-apiserver-bcf45dd9c-dssv7\" (UID: \"dbf96476-6021-46fe-b842-fd862c7996c8\") " pod="calico-apiserver/calico-apiserver-bcf45dd9c-dssv7" Jul 11 00:04:59.575715 kubelet[2472]: I0711 00:04:59.575152 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6smk\" (UniqueName: \"kubernetes.io/projected/f2557641-a3f6-4d30-9fc0-458168133b25-kube-api-access-d6smk\") pod \"calico-kube-controllers-7688b6944f-z5wj7\" (UID: \"f2557641-a3f6-4d30-9fc0-458168133b25\") " pod="calico-system/calico-kube-controllers-7688b6944f-z5wj7" Jul 11 00:04:59.575715 kubelet[2472]: I0711 00:04:59.575176 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a8f323e-1163-4dff-a9f9-a0583b72a07e-config\") pod \"goldmane-58fd7646b9-ts85z\" (UID: \"1a8f323e-1163-4dff-a9f9-a0583b72a07e\") " pod="calico-system/goldmane-58fd7646b9-ts85z" Jul 11 00:04:59.575715 kubelet[2472]: I0711 00:04:59.575198 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw456\" (UniqueName: \"kubernetes.io/projected/2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6-kube-api-access-hw456\") pod \"coredns-7c65d6cfc9-t59gl\" (UID: \"2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6\") " pod="kube-system/coredns-7c65d6cfc9-t59gl" Jul 11 00:04:59.575715 kubelet[2472]: I0711 00:04:59.575214 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mc8d\" (UniqueName: 
\"kubernetes.io/projected/78d5efbc-6d02-414b-bb0f-acc8122968c7-kube-api-access-4mc8d\") pod \"coredns-7c65d6cfc9-fj8dx\" (UID: \"78d5efbc-6d02-414b-bb0f-acc8122968c7\") " pod="kube-system/coredns-7c65d6cfc9-fj8dx" Jul 11 00:04:59.575715 kubelet[2472]: I0711 00:04:59.575230 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/26cbee42-8511-493f-84f7-7ccd7c1cf1b4-whisker-backend-key-pair\") pod \"whisker-64fb989db4-7nm8p\" (UID: \"26cbee42-8511-493f-84f7-7ccd7c1cf1b4\") " pod="calico-system/whisker-64fb989db4-7nm8p" Jul 11 00:04:59.575825 kubelet[2472]: I0711 00:04:59.575265 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnvmn\" (UniqueName: \"kubernetes.io/projected/1a8f323e-1163-4dff-a9f9-a0583b72a07e-kube-api-access-dnvmn\") pod \"goldmane-58fd7646b9-ts85z\" (UID: \"1a8f323e-1163-4dff-a9f9-a0583b72a07e\") " pod="calico-system/goldmane-58fd7646b9-ts85z" Jul 11 00:04:59.575825 kubelet[2472]: I0711 00:04:59.575294 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e57c9b88-19af-4e41-ab80-5de76c7ad975-calico-apiserver-certs\") pod \"calico-apiserver-bcf45dd9c-fb758\" (UID: \"e57c9b88-19af-4e41-ab80-5de76c7ad975\") " pod="calico-apiserver/calico-apiserver-bcf45dd9c-fb758" Jul 11 00:04:59.575825 kubelet[2472]: I0711 00:04:59.575312 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfhmj\" (UniqueName: \"kubernetes.io/projected/e57c9b88-19af-4e41-ab80-5de76c7ad975-kube-api-access-wfhmj\") pod \"calico-apiserver-bcf45dd9c-fb758\" (UID: \"e57c9b88-19af-4e41-ab80-5de76c7ad975\") " pod="calico-apiserver/calico-apiserver-bcf45dd9c-fb758" Jul 11 00:04:59.575825 kubelet[2472]: I0711 00:04:59.575344 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj28m\" (UniqueName: \"kubernetes.io/projected/d36dbd41-8dff-4eb6-a8f2-69b019debb74-kube-api-access-dj28m\") pod \"calico-apiserver-567bf94b46-mxmnt\" (UID: \"d36dbd41-8dff-4eb6-a8f2-69b019debb74\") " pod="calico-apiserver/calico-apiserver-567bf94b46-mxmnt" Jul 11 00:04:59.575825 kubelet[2472]: I0711 00:04:59.575361 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d36dbd41-8dff-4eb6-a8f2-69b019debb74-calico-apiserver-certs\") pod \"calico-apiserver-567bf94b46-mxmnt\" (UID: \"d36dbd41-8dff-4eb6-a8f2-69b019debb74\") " pod="calico-apiserver/calico-apiserver-567bf94b46-mxmnt" Jul 11 00:04:59.576009 kubelet[2472]: I0711 00:04:59.575379 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6-config-volume\") pod \"coredns-7c65d6cfc9-t59gl\" (UID: \"2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6\") " pod="kube-system/coredns-7c65d6cfc9-t59gl" Jul 11 00:04:59.576009 kubelet[2472]: I0711 00:04:59.575400 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l8kd\" (UniqueName: \"kubernetes.io/projected/dbf96476-6021-46fe-b842-fd862c7996c8-kube-api-access-4l8kd\") pod \"calico-apiserver-bcf45dd9c-dssv7\" (UID: 
\"dbf96476-6021-46fe-b842-fd862c7996c8\") " pod="calico-apiserver/calico-apiserver-bcf45dd9c-dssv7" Jul 11 00:04:59.576009 kubelet[2472]: I0711 00:04:59.575423 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsklz\" (UniqueName: \"kubernetes.io/projected/26cbee42-8511-493f-84f7-7ccd7c1cf1b4-kube-api-access-fsklz\") pod \"whisker-64fb989db4-7nm8p\" (UID: \"26cbee42-8511-493f-84f7-7ccd7c1cf1b4\") " pod="calico-system/whisker-64fb989db4-7nm8p" Jul 11 00:04:59.576009 kubelet[2472]: I0711 00:04:59.575439 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a8f323e-1163-4dff-a9f9-a0583b72a07e-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-ts85z\" (UID: \"1a8f323e-1163-4dff-a9f9-a0583b72a07e\") " pod="calico-system/goldmane-58fd7646b9-ts85z" Jul 11 00:04:59.720890 containerd[1432]: time="2025-07-11T00:04:59.720829555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7688b6944f-z5wj7,Uid:f2557641-a3f6-4d30-9fc0-458168133b25,Namespace:calico-system,Attempt:0,}" Jul 11 00:04:59.727562 containerd[1432]: time="2025-07-11T00:04:59.727517499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcf45dd9c-fb758,Uid:e57c9b88-19af-4e41-ab80-5de76c7ad975,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:04:59.734193 kubelet[2472]: E0711 00:04:59.734161 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:59.734597 containerd[1432]: time="2025-07-11T00:04:59.734566701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fj8dx,Uid:78d5efbc-6d02-414b-bb0f-acc8122968c7,Namespace:kube-system,Attempt:0,}" Jul 11 00:04:59.744099 kubelet[2472]: E0711 00:04:59.744053 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:04:59.745555 containerd[1432]: time="2025-07-11T00:04:59.745429419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t59gl,Uid:2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6,Namespace:kube-system,Attempt:0,}" Jul 11 00:04:59.746670 containerd[1432]: time="2025-07-11T00:04:59.746640241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567bf94b46-mxmnt,Uid:d36dbd41-8dff-4eb6-a8f2-69b019debb74,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:04:59.755061 containerd[1432]: time="2025-07-11T00:04:59.754735138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64fb989db4-7nm8p,Uid:26cbee42-8511-493f-84f7-7ccd7c1cf1b4,Namespace:calico-system,Attempt:0,}" Jul 11 00:04:59.775767 containerd[1432]: time="2025-07-11T00:04:59.772485130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcf45dd9c-dssv7,Uid:dbf96476-6021-46fe-b842-fd862c7996c8,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:04:59.775767 containerd[1432]: time="2025-07-11T00:04:59.772724862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ts85z,Uid:1a8f323e-1163-4dff-a9f9-a0583b72a07e,Namespace:calico-system,Attempt:0,}" Jul 11 00:04:59.966671 containerd[1432]: time="2025-07-11T00:04:59.964006813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 00:05:00.282145 
containerd[1432]: time="2025-07-11T00:05:00.281030183Z" level=error msg="Failed to destroy network for sandbox \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.283002 containerd[1432]: time="2025-07-11T00:05:00.282957758Z" level=error msg="encountered an error cleaning up failed sandbox \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.283080 containerd[1432]: time="2025-07-11T00:05:00.283045042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ts85z,Uid:1a8f323e-1163-4dff-a9f9-a0583b72a07e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.284842 kubelet[2472]: E0711 00:05:00.284796 2472 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.287682 kubelet[2472]: E0711 00:05:00.287627 2472 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ts85z" Jul 11 00:05:00.287682 kubelet[2472]: E0711 00:05:00.287679 2472 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ts85z" Jul 11 00:05:00.287793 kubelet[2472]: E0711 00:05:00.287737 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-ts85z_calico-system(1a8f323e-1163-4dff-a9f9-a0583b72a07e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-ts85z_calico-system(1a8f323e-1163-4dff-a9f9-a0583b72a07e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-ts85z" 
podUID="1a8f323e-1163-4dff-a9f9-a0583b72a07e" Jul 11 00:05:00.291716 containerd[1432]: time="2025-07-11T00:05:00.291672868Z" level=error msg="Failed to destroy network for sandbox \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.292225 containerd[1432]: time="2025-07-11T00:05:00.292188734Z" level=error msg="encountered an error cleaning up failed sandbox \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.292289 containerd[1432]: time="2025-07-11T00:05:00.292260417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fj8dx,Uid:78d5efbc-6d02-414b-bb0f-acc8122968c7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.292545 kubelet[2472]: E0711 00:05:00.292504 2472 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.292594 kubelet[2472]: E0711 00:05:00.292558 2472 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fj8dx" Jul 11 00:05:00.292594 kubelet[2472]: E0711 00:05:00.292580 2472 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-fj8dx" Jul 11 00:05:00.292644 kubelet[2472]: E0711 00:05:00.292614 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-fj8dx_kube-system(78d5efbc-6d02-414b-bb0f-acc8122968c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-fj8dx_kube-system(78d5efbc-6d02-414b-bb0f-acc8122968c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fj8dx" podUID="78d5efbc-6d02-414b-bb0f-acc8122968c7" Jul 11 00:05:00.297324 containerd[1432]: time="2025-07-11T00:05:00.297286145Z" level=error msg="Failed to destroy network for sandbox \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.297529 containerd[1432]: time="2025-07-11T00:05:00.297294066Z" level=error msg="Failed to destroy network for sandbox \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.297668 containerd[1432]: time="2025-07-11T00:05:00.297604321Z" level=error msg="encountered an error cleaning up failed sandbox \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.297707 containerd[1432]: time="2025-07-11T00:05:00.297682885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t59gl,Uid:2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.297885 containerd[1432]: time="2025-07-11T00:05:00.297844333Z" level=error msg="encountered an error cleaning up failed sandbox \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.298003 kubelet[2472]: E0711 00:05:00.297879 2472 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.298003 kubelet[2472]: E0711 00:05:00.297928 2472 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-t59gl" Jul 11 00:05:00.298003 kubelet[2472]: E0711 00:05:00.297961 2472 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-t59gl" Jul 11 00:05:00.298092 kubelet[2472]: E0711 00:05:00.298026 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-t59gl_kube-system(2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-t59gl_kube-system(2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-t59gl" podUID="2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6" Jul 11 00:05:00.298893 containerd[1432]: time="2025-07-11T00:05:00.298205031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64fb989db4-7nm8p,Uid:26cbee42-8511-493f-84f7-7ccd7c1cf1b4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.298893 containerd[1432]: time="2025-07-11T00:05:00.298017861Z" level=error msg="Failed to destroy network for sandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.299020 kubelet[2472]: E0711 00:05:00.298930 2472 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.299020 kubelet[2472]: E0711 00:05:00.298984 2472 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64fb989db4-7nm8p" Jul 11 00:05:00.299020 kubelet[2472]: E0711 00:05:00.299000 2472 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64fb989db4-7nm8p" Jul 11 00:05:00.299185 kubelet[2472]: E0711 00:05:00.299045 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-64fb989db4-7nm8p_calico-system(26cbee42-8511-493f-84f7-7ccd7c1cf1b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64fb989db4-7nm8p_calico-system(26cbee42-8511-493f-84f7-7ccd7c1cf1b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64fb989db4-7nm8p" podUID="26cbee42-8511-493f-84f7-7ccd7c1cf1b4" Jul 11 00:05:00.301428 containerd[1432]: time="2025-07-11T00:05:00.301228420Z" level=error msg="Failed to destroy network for sandbox \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.301428 containerd[1432]: time="2025-07-11T00:05:00.301302664Z" level=error msg="encountered an error cleaning up failed sandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.301428 containerd[1432]: time="2025-07-11T00:05:00.301356706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcf45dd9c-dssv7,Uid:dbf96476-6021-46fe-b842-fd862c7996c8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.301553 kubelet[2472]: E0711 00:05:00.301509 2472 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.301593 kubelet[2472]: E0711 00:05:00.301556 2472 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bcf45dd9c-dssv7" Jul 11 00:05:00.301593 kubelet[2472]: E0711 00:05:00.301579 2472 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bcf45dd9c-dssv7" Jul 11 00:05:00.301638 kubelet[2472]: E0711 00:05:00.301609 2472 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bcf45dd9c-dssv7_calico-apiserver(dbf96476-6021-46fe-b842-fd862c7996c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bcf45dd9c-dssv7_calico-apiserver(dbf96476-6021-46fe-b842-fd862c7996c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bcf45dd9c-dssv7" podUID="dbf96476-6021-46fe-b842-fd862c7996c8" Jul 11 00:05:00.301678 containerd[1432]: time="2025-07-11T00:05:00.301563717Z" level=error msg="encountered an error cleaning up failed sandbox \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.301678 containerd[1432]: time="2025-07-11T00:05:00.301647161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567bf94b46-mxmnt,Uid:d36dbd41-8dff-4eb6-a8f2-69b019debb74,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.301842 kubelet[2472]: E0711 00:05:00.301801 2472 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.301901 kubelet[2472]: E0711 00:05:00.301869 2472 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567bf94b46-mxmnt" Jul 11 00:05:00.301901 kubelet[2472]: E0711 00:05:00.301888 2472 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-567bf94b46-mxmnt" Jul 11 00:05:00.301953 kubelet[2472]: E0711 00:05:00.301924 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-567bf94b46-mxmnt_calico-apiserver(d36dbd41-8dff-4eb6-a8f2-69b019debb74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-567bf94b46-mxmnt_calico-apiserver(d36dbd41-8dff-4eb6-a8f2-69b019debb74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567bf94b46-mxmnt" podUID="d36dbd41-8dff-4eb6-a8f2-69b019debb74" Jul 11 00:05:00.307023 containerd[1432]: time="2025-07-11T00:05:00.306979744Z" level=error msg="Failed to destroy network for sandbox \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.307155 containerd[1432]: time="2025-07-11T00:05:00.307135152Z" level=error msg="Failed to destroy network for sandbox \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.307398 containerd[1432]: time="2025-07-11T00:05:00.307374323Z" level=error msg="encountered an error cleaning up failed sandbox \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.307438 containerd[1432]: time="2025-07-11T00:05:00.307419086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7688b6944f-z5wj7,Uid:f2557641-a3f6-4d30-9fc0-458168133b25,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.307651 kubelet[2472]: E0711 00:05:00.307595 2472 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.307728 kubelet[2472]: E0711 00:05:00.307667 2472 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7688b6944f-z5wj7" Jul 11 00:05:00.307756 kubelet[2472]: E0711 00:05:00.307734 2472 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7688b6944f-z5wj7" Jul 11 00:05:00.307809 kubelet[2472]: E0711 00:05:00.307775 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7688b6944f-z5wj7_calico-system(f2557641-a3f6-4d30-9fc0-458168133b25)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7688b6944f-z5wj7_calico-system(f2557641-a3f6-4d30-9fc0-458168133b25)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7688b6944f-z5wj7" podUID="f2557641-a3f6-4d30-9fc0-458168133b25" Jul 11 00:05:00.308222 containerd[1432]: time="2025-07-11T00:05:00.308183043Z" level=error msg="encountered an error cleaning up failed sandbox \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.308281 containerd[1432]: time="2025-07-11T00:05:00.308234566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcf45dd9c-fb758,Uid:e57c9b88-19af-4e41-ab80-5de76c7ad975,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.308399 kubelet[2472]: E0711 00:05:00.308374 2472 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.308434 kubelet[2472]: E0711 00:05:00.308409 2472 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bcf45dd9c-fb758" Jul 11 00:05:00.308434 kubelet[2472]: E0711 00:05:00.308427 2472 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bcf45dd9c-fb758" Jul 11 00:05:00.308485 
kubelet[2472]: E0711 00:05:00.308469 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bcf45dd9c-fb758_calico-apiserver(e57c9b88-19af-4e41-ab80-5de76c7ad975)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bcf45dd9c-fb758_calico-apiserver(e57c9b88-19af-4e41-ab80-5de76c7ad975)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bcf45dd9c-fb758" podUID="e57c9b88-19af-4e41-ab80-5de76c7ad975" Jul 11 00:05:00.908862 systemd[1]: Created slice kubepods-besteffort-pod69b81cc5_c8b0_45b3_aeb9_88292bebdc48.slice - libcontainer container kubepods-besteffort-pod69b81cc5_c8b0_45b3_aeb9_88292bebdc48.slice. Jul 11 00:05:00.912997 containerd[1432]: time="2025-07-11T00:05:00.912935663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8t882,Uid:69b81cc5-c8b0-45b3-aeb9-88292bebdc48,Namespace:calico-system,Attempt:0,}" Jul 11 00:05:00.964987 kubelet[2472]: I0711 00:05:00.964943 2472 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Jul 11 00:05:00.966307 containerd[1432]: time="2025-07-11T00:05:00.966028884Z" level=info msg="StopPodSandbox for \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\"" Jul 11 00:05:00.966307 containerd[1432]: time="2025-07-11T00:05:00.966196612Z" level=info msg="Ensure that sandbox 0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf in task-service has been cleanup successfully" Jul 11 00:05:00.967647 kubelet[2472]: I0711 00:05:00.967597 2472 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:00.968904 containerd[1432]: time="2025-07-11T00:05:00.968876505Z" level=info msg="StopPodSandbox for \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\"" Jul 11 00:05:00.969709 kubelet[2472]: I0711 00:05:00.969639 2472 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:00.971149 containerd[1432]: time="2025-07-11T00:05:00.969683704Z" level=info msg="Ensure that sandbox 9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3 in task-service has been cleanup successfully" Jul 11 00:05:00.972368 containerd[1432]: time="2025-07-11T00:05:00.970303455Z" level=info msg="StopPodSandbox for \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\"" Jul 11 00:05:00.972579 containerd[1432]: time="2025-07-11T00:05:00.972554166Z" level=info msg="Ensure that sandbox 1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b in task-service has been cleanup successfully" Jul 11 00:05:00.974022 kubelet[2472]: I0711 00:05:00.973946 2472 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:00.974487 containerd[1432]: time="2025-07-11T00:05:00.974457140Z" level=info msg="StopPodSandbox for \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\"" Jul 11 
00:05:00.974626 containerd[1432]: time="2025-07-11T00:05:00.974603587Z" level=info msg="Ensure that sandbox 9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e in task-service has been cleanup successfully" Jul 11 00:05:00.975801 kubelet[2472]: I0711 00:05:00.975729 2472 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:00.976665 containerd[1432]: time="2025-07-11T00:05:00.976631327Z" level=info msg="StopPodSandbox for \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\"" Jul 11 00:05:00.976910 containerd[1432]: time="2025-07-11T00:05:00.976776935Z" level=info msg="Ensure that sandbox 9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159 in task-service has been cleanup successfully" Jul 11 00:05:00.977510 kubelet[2472]: I0711 00:05:00.977435 2472 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:00.978340 containerd[1432]: time="2025-07-11T00:05:00.978267888Z" level=info msg="StopPodSandbox for \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\"" Jul 11 00:05:00.978825 containerd[1432]: time="2025-07-11T00:05:00.978801275Z" level=info msg="Ensure that sandbox 5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf in task-service has been cleanup successfully" Jul 11 00:05:00.983681 kubelet[2472]: I0711 00:05:00.983567 2472 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:00.984571 containerd[1432]: time="2025-07-11T00:05:00.984524437Z" level=info msg="StopPodSandbox for \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\"" Jul 11 00:05:00.984708 containerd[1432]: time="2025-07-11T00:05:00.984684965Z" level=info msg="Ensure that sandbox f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198 in task-service has been cleanup successfully" Jul 11 00:05:00.990517 kubelet[2472]: I0711 00:05:00.990411 2472 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Jul 11 00:05:00.991428 containerd[1432]: time="2025-07-11T00:05:00.991350934Z" level=info msg="StopPodSandbox for \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\"" Jul 11 00:05:00.993390 containerd[1432]: time="2025-07-11T00:05:00.993361914Z" level=info msg="Ensure that sandbox 9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2 in task-service has been cleanup successfully" Jul 11 00:05:00.996170 containerd[1432]: time="2025-07-11T00:05:00.996030885Z" level=error msg="Failed to destroy network for sandbox \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:00.998183 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2-shm.mount: Deactivated successfully. 
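
Every sandbox ADD and DEL from 00:05:00 onward fails with the same root cause: the Calico CNI plugin cannot read /var/lib/calico/nodename, the file that the calico/node container writes once it is running with that hostPath mounted. The node image is still being pulled at this point (ghcr.io/flatcar/calico/node:v3.30.2 above), so the kubelet parks each pod with "Error syncing pod, skipping" and retries with backoff. A minimal sketch of the failing check, assuming only what the log shows (illustrative Go, not the actual Calico plugin source):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // nodenameFile is written by calico/node at startup; until it exists,
    // every CNI ADD/DEL on this host fails with the stat error in the log.
    const nodenameFile = "/var/lib/calico/nodename"

    func nodeName() (string, error) {
    	if _, err := os.Stat(nodenameFile); err != nil {
    		// Wrapping os.Stat's error reproduces the message seen above:
    		// "stat /var/lib/calico/nodename: no such file or directory: check that ..."
    		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
    	}
    	data, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(data)), nil
    }

    func main() {
    	if _, err := nodeName(); err != nil {
    		fmt.Println("CNI plugin would fail here:", err)
    	}
    }

Because the kubelet retries RunPodSandbox on its own, failures of this shape typically clear without operator action once the calico/node image finishes pulling and the pod starts.
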
Jul 11 00:05:01.035765 containerd[1432]: time="2025-07-11T00:05:01.035338481Z" level=error msg="StopPodSandbox for \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\" failed" error="failed to destroy network for sandbox \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:01.035936 kubelet[2472]: E0711 00:05:01.035578 2472 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:01.035936 kubelet[2472]: E0711 00:05:01.035640 2472 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e"} Jul 11 00:05:01.035936 kubelet[2472]: E0711 00:05:01.035701 2472 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a8f323e-1163-4dff-a9f9-a0583b72a07e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:05:01.035936 kubelet[2472]: E0711 00:05:01.035723 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a8f323e-1163-4dff-a9f9-a0583b72a07e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-ts85z" podUID="1a8f323e-1163-4dff-a9f9-a0583b72a07e" Jul 11 00:05:01.036522 containerd[1432]: time="2025-07-11T00:05:01.036432733Z" level=error msg="encountered an error cleaning up failed sandbox \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:01.036522 containerd[1432]: time="2025-07-11T00:05:01.036491975Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8t882,Uid:69b81cc5-c8b0-45b3-aeb9-88292bebdc48,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:01.036750 kubelet[2472]: E0711 00:05:01.036626 2472 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:01.037814 kubelet[2472]: E0711 00:05:01.036817 2472 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8t882" Jul 11 00:05:01.037814 kubelet[2472]: E0711 00:05:01.036843 2472 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8t882" Jul 11 00:05:01.037814 kubelet[2472]: E0711 00:05:01.036892 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8t882_calico-system(69b81cc5-c8b0-45b3-aeb9-88292bebdc48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8t882_calico-system(69b81cc5-c8b0-45b3-aeb9-88292bebdc48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8t882" podUID="69b81cc5-c8b0-45b3-aeb9-88292bebdc48" Jul 11 00:05:01.047054 containerd[1432]: time="2025-07-11T00:05:01.047004195Z" level=error msg="StopPodSandbox for \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\" failed" error="failed to destroy network for sandbox \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:01.047389 kubelet[2472]: E0711 00:05:01.047356 2472 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Jul 11 00:05:01.047629 kubelet[2472]: E0711 00:05:01.047527 2472 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf"} Jul 11 00:05:01.047776 kubelet[2472]: E0711 00:05:01.047569 2472 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2557641-a3f6-4d30-9fc0-458168133b25\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:05:01.047776 kubelet[2472]: E0711 00:05:01.047745 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2557641-a3f6-4d30-9fc0-458168133b25\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7688b6944f-z5wj7" podUID="f2557641-a3f6-4d30-9fc0-458168133b25" Jul 11 00:05:01.053752 containerd[1432]: time="2025-07-11T00:05:01.053420019Z" level=error msg="StopPodSandbox for \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\" failed" error="failed to destroy network for sandbox \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:01.053840 containerd[1432]: time="2025-07-11T00:05:01.053806358Z" level=error msg="StopPodSandbox for \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\" failed" error="failed to destroy network for sandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:01.054312 kubelet[2472]: E0711 00:05:01.054049 2472 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:01.054312 kubelet[2472]: E0711 00:05:01.054093 2472 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3"} Jul 11 00:05:01.054312 kubelet[2472]: E0711 00:05:01.054127 2472 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbf96476-6021-46fe-b842-fd862c7996c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:05:01.054312 kubelet[2472]: E0711 00:05:01.054147 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbf96476-6021-46fe-b842-fd862c7996c8\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bcf45dd9c-dssv7" podUID="dbf96476-6021-46fe-b842-fd862c7996c8" Jul 11 00:05:01.054554 kubelet[2472]: E0711 00:05:01.054182 2472 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:01.054554 kubelet[2472]: E0711 00:05:01.054210 2472 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf"} Jul 11 00:05:01.054554 kubelet[2472]: E0711 00:05:01.054227 2472 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e57c9b88-19af-4e41-ab80-5de76c7ad975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:05:01.054554 kubelet[2472]: E0711 00:05:01.054254 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e57c9b88-19af-4e41-ab80-5de76c7ad975\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bcf45dd9c-fb758" podUID="e57c9b88-19af-4e41-ab80-5de76c7ad975" Jul 11 00:05:01.056199 containerd[1432]: time="2025-07-11T00:05:01.056164469Z" level=error msg="StopPodSandbox for \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\" failed" error="failed to destroy network for sandbox \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:01.056480 kubelet[2472]: E0711 00:05:01.056443 2472 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Jul 11 00:05:01.056554 containerd[1432]: time="2025-07-11T00:05:01.056521726Z" level=error msg="StopPodSandbox for 
\"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\" failed" error="failed to destroy network for sandbox \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:01.056863 kubelet[2472]: E0711 00:05:01.056807 2472 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2"} Jul 11 00:05:01.057028 kubelet[2472]: E0711 00:05:01.056743 2472 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:01.057028 kubelet[2472]: E0711 00:05:01.056949 2472 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198"} Jul 11 00:05:01.057028 kubelet[2472]: E0711 00:05:01.056972 2472 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26cbee42-8511-493f-84f7-7ccd7c1cf1b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:05:01.057028 kubelet[2472]: E0711 00:05:01.057002 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26cbee42-8511-493f-84f7-7ccd7c1cf1b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64fb989db4-7nm8p" podUID="26cbee42-8511-493f-84f7-7ccd7c1cf1b4" Jul 11 00:05:01.057328 kubelet[2472]: E0711 00:05:01.056843 2472 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d36dbd41-8dff-4eb6-a8f2-69b019debb74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:05:01.057328 kubelet[2472]: E0711 00:05:01.057297 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d36dbd41-8dff-4eb6-a8f2-69b019debb74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-567bf94b46-mxmnt" podUID="d36dbd41-8dff-4eb6-a8f2-69b019debb74" Jul 11 00:05:01.060595 containerd[1432]: time="2025-07-11T00:05:01.060554478Z" level=error msg="StopPodSandbox for \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\" failed" error="failed to destroy network for sandbox \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:01.060734 kubelet[2472]: E0711 00:05:01.060701 2472 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:01.060775 kubelet[2472]: E0711 00:05:01.060736 2472 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b"} Jul 11 00:05:01.060775 kubelet[2472]: E0711 00:05:01.060763 2472 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78d5efbc-6d02-414b-bb0f-acc8122968c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:05:01.060861 kubelet[2472]: E0711 00:05:01.060781 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78d5efbc-6d02-414b-bb0f-acc8122968c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-fj8dx" podUID="78d5efbc-6d02-414b-bb0f-acc8122968c7" Jul 11 00:05:01.063268 containerd[1432]: time="2025-07-11T00:05:01.063001314Z" level=error msg="StopPodSandbox for \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\" failed" error="failed to destroy network for sandbox \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:01.063341 kubelet[2472]: E0711 00:05:01.063161 2472 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:01.063341 kubelet[2472]: E0711 00:05:01.063193 2472 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159"} Jul 11 00:05:01.063341 kubelet[2472]: E0711 00:05:01.063224 2472 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:05:01.063341 kubelet[2472]: E0711 00:05:01.063241 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-t59gl" podUID="2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6" Jul 11 00:05:01.992580 kubelet[2472]: I0711 00:05:01.992546 2472 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Jul 11 00:05:01.993925 containerd[1432]: time="2025-07-11T00:05:01.993887271Z" level=info msg="StopPodSandbox for \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\"" Jul 11 00:05:01.994175 containerd[1432]: time="2025-07-11T00:05:01.994061399Z" level=info msg="Ensure that sandbox fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2 in task-service has been cleanup successfully" Jul 11 00:05:02.016153 containerd[1432]: time="2025-07-11T00:05:02.015743602Z" level=error msg="StopPodSandbox for \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\" failed" error="failed to destroy network for sandbox \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 00:05:02.016278 kubelet[2472]: E0711 00:05:02.015997 2472 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Jul 11 00:05:02.016278 kubelet[2472]: E0711 00:05:02.016052 2472 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2"} Jul 11 00:05:02.016278 kubelet[2472]: E0711 00:05:02.016089 2472 kuberuntime_manager.go:1079] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69b81cc5-c8b0-45b3-aeb9-88292bebdc48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 11 00:05:02.016278 kubelet[2472]: E0711 00:05:02.016110 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69b81cc5-c8b0-45b3-aeb9-88292bebdc48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8t882" podUID="69b81cc5-c8b0-45b3-aeb9-88292bebdc48" Jul 11 00:05:04.088945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4123822953.mount: Deactivated successfully. Jul 11 00:05:04.435451 containerd[1432]: time="2025-07-11T00:05:04.435324190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:04.436007 containerd[1432]: time="2025-07-11T00:05:04.435956697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 11 00:05:04.440122 containerd[1432]: time="2025-07-11T00:05:04.440081272Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:04.443184 containerd[1432]: time="2025-07-11T00:05:04.443122281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:04.443948 containerd[1432]: time="2025-07-11T00:05:04.443636423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.479583287s" Jul 11 00:05:04.443948 containerd[1432]: time="2025-07-11T00:05:04.443671424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 11 00:05:04.455796 containerd[1432]: time="2025-07-11T00:05:04.455667734Z" level=info msg="CreateContainer within sandbox \"c3e49f35ac8e5088255d07923ad188d00931c3d1cad85402dc90f91c36e759be\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 00:05:04.471314 containerd[1432]: time="2025-07-11T00:05:04.471263556Z" level=info msg="CreateContainer within sandbox \"c3e49f35ac8e5088255d07923ad188d00931c3d1cad85402dc90f91c36e759be\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c0b88155271ac76576ec2b1ef97d09a5b352bb5db81d47710a042af9fd4c4049\"" Jul 11 00:05:04.472098 containerd[1432]: time="2025-07-11T00:05:04.471820020Z" 
level=info msg="StartContainer for \"c0b88155271ac76576ec2b1ef97d09a5b352bb5db81d47710a042af9fd4c4049\"" Jul 11 00:05:04.540112 systemd[1]: Started cri-containerd-c0b88155271ac76576ec2b1ef97d09a5b352bb5db81d47710a042af9fd4c4049.scope - libcontainer container c0b88155271ac76576ec2b1ef97d09a5b352bb5db81d47710a042af9fd4c4049. Jul 11 00:05:04.570404 containerd[1432]: time="2025-07-11T00:05:04.570356085Z" level=info msg="StartContainer for \"c0b88155271ac76576ec2b1ef97d09a5b352bb5db81d47710a042af9fd4c4049\" returns successfully" Jul 11 00:05:04.828459 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 11 00:05:04.828662 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 11 00:05:04.942930 containerd[1432]: time="2025-07-11T00:05:04.942888267Z" level=info msg="StopPodSandbox for \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\"" Jul 11 00:05:05.034172 kubelet[2472]: I0711 00:05:05.034050 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r2d7q" podStartSLOduration=2.490260292 podStartE2EDuration="13.034029169s" podCreationTimestamp="2025-07-11 00:04:52 +0000 UTC" firstStartedPulling="2025-07-11 00:04:53.90068482 +0000 UTC m=+22.113489526" lastFinishedPulling="2025-07-11 00:05:04.444453657 +0000 UTC m=+32.657258403" observedRunningTime="2025-07-11 00:05:05.03111873 +0000 UTC m=+33.243923516" watchObservedRunningTime="2025-07-11 00:05:05.034029169 +0000 UTC m=+33.246833915" Jul 11 00:05:05.104193 systemd[1]: run-containerd-runc-k8s.io-c0b88155271ac76576ec2b1ef97d09a5b352bb5db81d47710a042af9fd4c4049-runc.q6a8aV.mount: Deactivated successfully. Jul 11 00:05:05.397532 containerd[1432]: 2025-07-11 00:05:05.100 [INFO][3857] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:05.397532 containerd[1432]: 2025-07-11 00:05:05.106 [INFO][3857] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" iface="eth0" netns="/var/run/netns/cni-f986cf02-cece-f557-b45f-e16016dabfa8" Jul 11 00:05:05.397532 containerd[1432]: 2025-07-11 00:05:05.108 [INFO][3857] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" iface="eth0" netns="/var/run/netns/cni-f986cf02-cece-f557-b45f-e16016dabfa8" Jul 11 00:05:05.397532 containerd[1432]: 2025-07-11 00:05:05.109 [INFO][3857] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" iface="eth0" netns="/var/run/netns/cni-f986cf02-cece-f557-b45f-e16016dabfa8" Jul 11 00:05:05.397532 containerd[1432]: 2025-07-11 00:05:05.109 [INFO][3857] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:05.397532 containerd[1432]: 2025-07-11 00:05:05.109 [INFO][3857] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:05.397532 containerd[1432]: 2025-07-11 00:05:05.383 [INFO][3884] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" HandleID="k8s-pod-network.f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Workload="localhost-k8s-whisker--64fb989db4--7nm8p-eth0" Jul 11 00:05:05.397532 containerd[1432]: 2025-07-11 00:05:05.383 [INFO][3884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:05.397532 containerd[1432]: 2025-07-11 00:05:05.383 [INFO][3884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:05.397532 containerd[1432]: 2025-07-11 00:05:05.392 [WARNING][3884] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" HandleID="k8s-pod-network.f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Workload="localhost-k8s-whisker--64fb989db4--7nm8p-eth0" Jul 11 00:05:05.397532 containerd[1432]: 2025-07-11 00:05:05.392 [INFO][3884] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" HandleID="k8s-pod-network.f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Workload="localhost-k8s-whisker--64fb989db4--7nm8p-eth0" Jul 11 00:05:05.397532 containerd[1432]: 2025-07-11 00:05:05.393 [INFO][3884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:05.397532 containerd[1432]: 2025-07-11 00:05:05.395 [INFO][3857] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:05.397963 containerd[1432]: time="2025-07-11T00:05:05.397690802Z" level=info msg="TearDown network for sandbox \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\" successfully" Jul 11 00:05:05.397963 containerd[1432]: time="2025-07-11T00:05:05.397724724Z" level=info msg="StopPodSandbox for \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\" returns successfully" Jul 11 00:05:05.399926 systemd[1]: run-netns-cni\x2df986cf02\x2dcece\x2df557\x2db45f\x2de16016dabfa8.mount: Deactivated successfully. 
Jul 11 00:05:05.521277 kubelet[2472]: I0711 00:05:05.521049 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/26cbee42-8511-493f-84f7-7ccd7c1cf1b4-whisker-backend-key-pair\") pod \"26cbee42-8511-493f-84f7-7ccd7c1cf1b4\" (UID: \"26cbee42-8511-493f-84f7-7ccd7c1cf1b4\") " Jul 11 00:05:05.521545 kubelet[2472]: I0711 00:05:05.521492 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26cbee42-8511-493f-84f7-7ccd7c1cf1b4-whisker-ca-bundle\") pod \"26cbee42-8511-493f-84f7-7ccd7c1cf1b4\" (UID: \"26cbee42-8511-493f-84f7-7ccd7c1cf1b4\") " Jul 11 00:05:05.521545 kubelet[2472]: I0711 00:05:05.521536 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsklz\" (UniqueName: \"kubernetes.io/projected/26cbee42-8511-493f-84f7-7ccd7c1cf1b4-kube-api-access-fsklz\") pod \"26cbee42-8511-493f-84f7-7ccd7c1cf1b4\" (UID: \"26cbee42-8511-493f-84f7-7ccd7c1cf1b4\") " Jul 11 00:05:05.524841 kubelet[2472]: I0711 00:05:05.524785 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26cbee42-8511-493f-84f7-7ccd7c1cf1b4-kube-api-access-fsklz" (OuterVolumeSpecName: "kube-api-access-fsklz") pod "26cbee42-8511-493f-84f7-7ccd7c1cf1b4" (UID: "26cbee42-8511-493f-84f7-7ccd7c1cf1b4"). InnerVolumeSpecName "kube-api-access-fsklz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:05:05.526392 systemd[1]: var-lib-kubelet-pods-26cbee42\x2d8511\x2d493f\x2d84f7\x2d7ccd7c1cf1b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfsklz.mount: Deactivated successfully. Jul 11 00:05:05.528781 kubelet[2472]: I0711 00:05:05.528742 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26cbee42-8511-493f-84f7-7ccd7c1cf1b4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "26cbee42-8511-493f-84f7-7ccd7c1cf1b4" (UID: "26cbee42-8511-493f-84f7-7ccd7c1cf1b4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 11 00:05:05.538675 kubelet[2472]: I0711 00:05:05.538629 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cbee42-8511-493f-84f7-7ccd7c1cf1b4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "26cbee42-8511-493f-84f7-7ccd7c1cf1b4" (UID: "26cbee42-8511-493f-84f7-7ccd7c1cf1b4"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 11 00:05:05.623104 kubelet[2472]: I0711 00:05:05.623063 2472 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsklz\" (UniqueName: \"kubernetes.io/projected/26cbee42-8511-493f-84f7-7ccd7c1cf1b4-kube-api-access-fsklz\") on node \"localhost\" DevicePath \"\"" Jul 11 00:05:05.623104 kubelet[2472]: I0711 00:05:05.623096 2472 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/26cbee42-8511-493f-84f7-7ccd7c1cf1b4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 00:05:05.623104 kubelet[2472]: I0711 00:05:05.623107 2472 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26cbee42-8511-493f-84f7-7ccd7c1cf1b4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 00:05:05.892920 systemd[1]: Removed slice kubepods-besteffort-pod26cbee42_8511_493f_84f7_7ccd7c1cf1b4.slice - libcontainer container kubepods-besteffort-pod26cbee42_8511_493f_84f7_7ccd7c1cf1b4.slice. Jul 11 00:05:06.077120 systemd[1]: Created slice kubepods-besteffort-pod4336099f_ded9_45bb_af4b_482bec0e8677.slice - libcontainer container kubepods-besteffort-pod4336099f_ded9_45bb_af4b_482bec0e8677.slice. Jul 11 00:05:06.089352 systemd[1]: var-lib-kubelet-pods-26cbee42\x2d8511\x2d493f\x2d84f7\x2d7ccd7c1cf1b4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 11 00:05:06.105327 kubelet[2472]: I0711 00:05:06.105236 2472 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:05:06.105676 kubelet[2472]: E0711 00:05:06.105653 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:05:06.228446 kubelet[2472]: I0711 00:05:06.228330 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4336099f-ded9-45bb-af4b-482bec0e8677-whisker-backend-key-pair\") pod \"whisker-dc44864cb-fvq9w\" (UID: \"4336099f-ded9-45bb-af4b-482bec0e8677\") " pod="calico-system/whisker-dc44864cb-fvq9w" Jul 11 00:05:06.228446 kubelet[2472]: I0711 00:05:06.228389 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcfvm\" (UniqueName: \"kubernetes.io/projected/4336099f-ded9-45bb-af4b-482bec0e8677-kube-api-access-wcfvm\") pod \"whisker-dc44864cb-fvq9w\" (UID: \"4336099f-ded9-45bb-af4b-482bec0e8677\") " pod="calico-system/whisker-dc44864cb-fvq9w" Jul 11 00:05:06.228446 kubelet[2472]: I0711 00:05:06.228420 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4336099f-ded9-45bb-af4b-482bec0e8677-whisker-ca-bundle\") pod \"whisker-dc44864cb-fvq9w\" (UID: \"4336099f-ded9-45bb-af4b-482bec0e8677\") " pod="calico-system/whisker-dc44864cb-fvq9w" Jul 11 00:05:06.380565 containerd[1432]: time="2025-07-11T00:05:06.380378378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc44864cb-fvq9w,Uid:4336099f-ded9-45bb-af4b-482bec0e8677,Namespace:calico-system,Attempt:0,}" Jul 11 00:05:06.511872 kernel: bpftool[4085]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 11 00:05:06.560001 systemd-networkd[1371]: cali8df8a7fc59f: Link UP Jul 
11 00:05:06.560643 systemd-networkd[1371]: cali8df8a7fc59f: Gained carrier Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.462 [INFO][4046] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.477 [INFO][4046] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--dc44864cb--fvq9w-eth0 whisker-dc44864cb- calico-system 4336099f-ded9-45bb-af4b-482bec0e8677 949 0 2025-07-11 00:05:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:dc44864cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-dc44864cb-fvq9w eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8df8a7fc59f [] [] }} ContainerID="841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" Namespace="calico-system" Pod="whisker-dc44864cb-fvq9w" WorkloadEndpoint="localhost-k8s-whisker--dc44864cb--fvq9w-" Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.477 [INFO][4046] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" Namespace="calico-system" Pod="whisker-dc44864cb-fvq9w" WorkloadEndpoint="localhost-k8s-whisker--dc44864cb--fvq9w-eth0" Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.508 [INFO][4071] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" HandleID="k8s-pod-network.841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" Workload="localhost-k8s-whisker--dc44864cb--fvq9w-eth0" Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.509 [INFO][4071] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" HandleID="k8s-pod-network.841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" Workload="localhost-k8s-whisker--dc44864cb--fvq9w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d6d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-dc44864cb-fvq9w", "timestamp":"2025-07-11 00:05:06.508952114 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.509 [INFO][4071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.511 [INFO][4071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.511 [INFO][4071] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.524 [INFO][4071] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" host="localhost" Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.531 [INFO][4071] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.535 [INFO][4071] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.537 [INFO][4071] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.539 [INFO][4071] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.539 [INFO][4071] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" host="localhost" Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.541 [INFO][4071] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.545 [INFO][4071] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" host="localhost" Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.552 [INFO][4071] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" host="localhost" Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.552 [INFO][4071] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" host="localhost" Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.552 [INFO][4071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:05:06.574383 containerd[1432]: 2025-07-11 00:05:06.552 [INFO][4071] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" HandleID="k8s-pod-network.841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" Workload="localhost-k8s-whisker--dc44864cb--fvq9w-eth0" Jul 11 00:05:06.575264 containerd[1432]: 2025-07-11 00:05:06.554 [INFO][4046] cni-plugin/k8s.go 418: Populated endpoint ContainerID="841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" Namespace="calico-system" Pod="whisker-dc44864cb-fvq9w" WorkloadEndpoint="localhost-k8s-whisker--dc44864cb--fvq9w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--dc44864cb--fvq9w-eth0", GenerateName:"whisker-dc44864cb-", Namespace:"calico-system", SelfLink:"", UID:"4336099f-ded9-45bb-af4b-482bec0e8677", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dc44864cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-dc44864cb-fvq9w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8df8a7fc59f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:06.575264 containerd[1432]: 2025-07-11 00:05:06.554 [INFO][4046] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" Namespace="calico-system" Pod="whisker-dc44864cb-fvq9w" WorkloadEndpoint="localhost-k8s-whisker--dc44864cb--fvq9w-eth0" Jul 11 00:05:06.575264 containerd[1432]: 2025-07-11 00:05:06.554 [INFO][4046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8df8a7fc59f ContainerID="841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" Namespace="calico-system" Pod="whisker-dc44864cb-fvq9w" WorkloadEndpoint="localhost-k8s-whisker--dc44864cb--fvq9w-eth0" Jul 11 00:05:06.575264 containerd[1432]: 2025-07-11 00:05:06.563 [INFO][4046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" Namespace="calico-system" Pod="whisker-dc44864cb-fvq9w" WorkloadEndpoint="localhost-k8s-whisker--dc44864cb--fvq9w-eth0" Jul 11 00:05:06.575264 containerd[1432]: 2025-07-11 00:05:06.563 [INFO][4046] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" Namespace="calico-system" Pod="whisker-dc44864cb-fvq9w" WorkloadEndpoint="localhost-k8s-whisker--dc44864cb--fvq9w-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--dc44864cb--fvq9w-eth0", GenerateName:"whisker-dc44864cb-", Namespace:"calico-system", SelfLink:"", UID:"4336099f-ded9-45bb-af4b-482bec0e8677", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 5, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"dc44864cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f", Pod:"whisker-dc44864cb-fvq9w", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8df8a7fc59f", MAC:"82:d9:65:9d:63:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:06.575264 containerd[1432]: 2025-07-11 00:05:06.572 [INFO][4046] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f" Namespace="calico-system" Pod="whisker-dc44864cb-fvq9w" WorkloadEndpoint="localhost-k8s-whisker--dc44864cb--fvq9w-eth0" Jul 11 00:05:06.590755 containerd[1432]: time="2025-07-11T00:05:06.590176813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:05:06.590755 containerd[1432]: time="2025-07-11T00:05:06.590608870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:05:06.590755 containerd[1432]: time="2025-07-11T00:05:06.590621791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:05:06.590755 containerd[1432]: time="2025-07-11T00:05:06.590708794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:05:06.616047 systemd[1]: Started cri-containerd-841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f.scope - libcontainer container 841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f. 
Jul 11 00:05:06.627569 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:05:06.679450 containerd[1432]: time="2025-07-11T00:05:06.679384629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc44864cb-fvq9w,Uid:4336099f-ded9-45bb-af4b-482bec0e8677,Namespace:calico-system,Attempt:0,} returns sandbox id \"841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f\"" Jul 11 00:05:06.682441 containerd[1432]: time="2025-07-11T00:05:06.681273264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 00:05:06.703060 systemd-networkd[1371]: vxlan.calico: Link UP Jul 11 00:05:06.703069 systemd-networkd[1371]: vxlan.calico: Gained carrier Jul 11 00:05:07.016204 kubelet[2472]: E0711 00:05:07.016169 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:05:07.718315 containerd[1432]: time="2025-07-11T00:05:07.717617176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:07.727862 containerd[1432]: time="2025-07-11T00:05:07.718342964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 11 00:05:07.727862 containerd[1432]: time="2025-07-11T00:05:07.722953341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.041644115s" Jul 11 00:05:07.727862 containerd[1432]: time="2025-07-11T00:05:07.722985862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 11 00:05:07.727862 containerd[1432]: time="2025-07-11T00:05:07.724022662Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:07.727862 containerd[1432]: time="2025-07-11T00:05:07.724817052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:07.727862 containerd[1432]: time="2025-07-11T00:05:07.727515516Z" level=info msg="CreateContainer within sandbox \"841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 00:05:07.749110 containerd[1432]: time="2025-07-11T00:05:07.749063702Z" level=info msg="CreateContainer within sandbox \"841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"80d2d334ad848fbced6d15baaaec3f4dac07634e29695aeacbc09bb785361e96\"" Jul 11 00:05:07.749824 containerd[1432]: time="2025-07-11T00:05:07.749782530Z" level=info msg="StartContainer for \"80d2d334ad848fbced6d15baaaec3f4dac07634e29695aeacbc09bb785361e96\"" Jul 11 00:05:07.809063 systemd[1]: Started cri-containerd-80d2d334ad848fbced6d15baaaec3f4dac07634e29695aeacbc09bb785361e96.scope 
- libcontainer container 80d2d334ad848fbced6d15baaaec3f4dac07634e29695aeacbc09bb785361e96. Jul 11 00:05:07.859009 containerd[1432]: time="2025-07-11T00:05:07.858867513Z" level=info msg="StartContainer for \"80d2d334ad848fbced6d15baaaec3f4dac07634e29695aeacbc09bb785361e96\" returns successfully" Jul 11 00:05:07.862125 containerd[1432]: time="2025-07-11T00:05:07.861983592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 00:05:07.887460 kubelet[2472]: I0711 00:05:07.887409 2472 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26cbee42-8511-493f-84f7-7ccd7c1cf1b4" path="/var/lib/kubelet/pods/26cbee42-8511-493f-84f7-7ccd7c1cf1b4/volumes" Jul 11 00:05:08.002273 systemd-networkd[1371]: cali8df8a7fc59f: Gained IPv6LL Jul 11 00:05:08.640203 systemd-networkd[1371]: vxlan.calico: Gained IPv6LL Jul 11 00:05:09.631815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2560557451.mount: Deactivated successfully. Jul 11 00:05:09.647243 containerd[1432]: time="2025-07-11T00:05:09.647199292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:09.648172 containerd[1432]: time="2025-07-11T00:05:09.647742112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 11 00:05:09.649278 containerd[1432]: time="2025-07-11T00:05:09.649236125Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:09.651810 containerd[1432]: time="2025-07-11T00:05:09.651617411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:09.658014 containerd[1432]: time="2025-07-11T00:05:09.657969080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.795944966s" Jul 11 00:05:09.658119 containerd[1432]: time="2025-07-11T00:05:09.658039882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 11 00:05:09.661300 containerd[1432]: time="2025-07-11T00:05:09.661253638Z" level=info msg="CreateContainer within sandbox \"841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 00:05:09.671464 containerd[1432]: time="2025-07-11T00:05:09.671411204Z" level=info msg="CreateContainer within sandbox \"841f2809aae833af5b7b25f76603bbd53b138bf71b89b6a888abdf6c536ae10f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3fd870384b963ea0a32d5ca05912be0780e46dc815338504cfa749b44107a1a7\"" Jul 11 00:05:09.672105 containerd[1432]: time="2025-07-11T00:05:09.672062107Z" level=info msg="StartContainer for \"3fd870384b963ea0a32d5ca05912be0780e46dc815338504cfa749b44107a1a7\"" Jul 11 00:05:09.709116 systemd[1]: Started 
cri-containerd-3fd870384b963ea0a32d5ca05912be0780e46dc815338504cfa749b44107a1a7.scope - libcontainer container 3fd870384b963ea0a32d5ca05912be0780e46dc815338504cfa749b44107a1a7. Jul 11 00:05:09.754318 containerd[1432]: time="2025-07-11T00:05:09.754263987Z" level=info msg="StartContainer for \"3fd870384b963ea0a32d5ca05912be0780e46dc815338504cfa749b44107a1a7\" returns successfully" Jul 11 00:05:10.035156 kubelet[2472]: I0711 00:05:10.034983 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-dc44864cb-fvq9w" podStartSLOduration=1.057226764 podStartE2EDuration="4.034965379s" podCreationTimestamp="2025-07-11 00:05:06 +0000 UTC" firstStartedPulling="2025-07-11 00:05:06.681035414 +0000 UTC m=+34.893840160" lastFinishedPulling="2025-07-11 00:05:09.658774029 +0000 UTC m=+37.871578775" observedRunningTime="2025-07-11 00:05:10.03470817 +0000 UTC m=+38.247512916" watchObservedRunningTime="2025-07-11 00:05:10.034965379 +0000 UTC m=+38.247770085" Jul 11 00:05:12.882880 containerd[1432]: time="2025-07-11T00:05:12.882663288Z" level=info msg="StopPodSandbox for \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\"" Jul 11 00:05:12.882880 containerd[1432]: time="2025-07-11T00:05:12.882711610Z" level=info msg="StopPodSandbox for \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\"" Jul 11 00:05:12.882880 containerd[1432]: time="2025-07-11T00:05:12.882811893Z" level=info msg="StopPodSandbox for \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\"" Jul 11 00:05:13.006271 containerd[1432]: 2025-07-11 00:05:12.951 [INFO][4343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:13.006271 containerd[1432]: 2025-07-11 00:05:12.951 [INFO][4343] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" iface="eth0" netns="/var/run/netns/cni-a728213b-f746-30f9-0678-f2cc9a9c97a9" Jul 11 00:05:13.006271 containerd[1432]: 2025-07-11 00:05:12.951 [INFO][4343] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" iface="eth0" netns="/var/run/netns/cni-a728213b-f746-30f9-0678-f2cc9a9c97a9" Jul 11 00:05:13.006271 containerd[1432]: 2025-07-11 00:05:12.952 [INFO][4343] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" iface="eth0" netns="/var/run/netns/cni-a728213b-f746-30f9-0678-f2cc9a9c97a9" Jul 11 00:05:13.006271 containerd[1432]: 2025-07-11 00:05:12.952 [INFO][4343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:13.006271 containerd[1432]: 2025-07-11 00:05:12.952 [INFO][4343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:13.006271 containerd[1432]: 2025-07-11 00:05:12.990 [INFO][4364] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" HandleID="k8s-pod-network.5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:13.006271 containerd[1432]: 2025-07-11 00:05:12.990 [INFO][4364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:13.006271 containerd[1432]: 2025-07-11 00:05:12.990 [INFO][4364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:13.006271 containerd[1432]: 2025-07-11 00:05:12.999 [WARNING][4364] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" HandleID="k8s-pod-network.5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:13.006271 containerd[1432]: 2025-07-11 00:05:12.999 [INFO][4364] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" HandleID="k8s-pod-network.5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:13.006271 containerd[1432]: 2025-07-11 00:05:13.001 [INFO][4364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:13.006271 containerd[1432]: 2025-07-11 00:05:13.003 [INFO][4343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:13.011481 containerd[1432]: time="2025-07-11T00:05:13.007965060Z" level=info msg="TearDown network for sandbox \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\" successfully" Jul 11 00:05:13.011481 containerd[1432]: time="2025-07-11T00:05:13.008009421Z" level=info msg="StopPodSandbox for \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\" returns successfully" Jul 11 00:05:13.011481 containerd[1432]: time="2025-07-11T00:05:13.010222252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcf45dd9c-fb758,Uid:e57c9b88-19af-4e41-ab80-5de76c7ad975,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:05:13.008938 systemd[1]: run-netns-cni\x2da728213b\x2df746\x2d30f9\x2d0678\x2df2cc9a9c97a9.mount: Deactivated successfully. Jul 11 00:05:13.037723 containerd[1432]: 2025-07-11 00:05:12.957 [INFO][4342] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:13.037723 containerd[1432]: 2025-07-11 00:05:12.958 [INFO][4342] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" iface="eth0" netns="/var/run/netns/cni-0ca5ae54-6b10-5aff-b606-3427d4889094" Jul 11 00:05:13.037723 containerd[1432]: 2025-07-11 00:05:12.958 [INFO][4342] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" iface="eth0" netns="/var/run/netns/cni-0ca5ae54-6b10-5aff-b606-3427d4889094" Jul 11 00:05:13.037723 containerd[1432]: 2025-07-11 00:05:12.959 [INFO][4342] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" iface="eth0" netns="/var/run/netns/cni-0ca5ae54-6b10-5aff-b606-3427d4889094" Jul 11 00:05:13.037723 containerd[1432]: 2025-07-11 00:05:12.959 [INFO][4342] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:13.037723 containerd[1432]: 2025-07-11 00:05:12.959 [INFO][4342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:13.037723 containerd[1432]: 2025-07-11 00:05:12.992 [INFO][4372] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" HandleID="k8s-pod-network.9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Workload="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:13.037723 containerd[1432]: 2025-07-11 00:05:12.992 [INFO][4372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:13.037723 containerd[1432]: 2025-07-11 00:05:13.001 [INFO][4372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:13.037723 containerd[1432]: 2025-07-11 00:05:13.012 [WARNING][4372] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" HandleID="k8s-pod-network.9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Workload="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:13.037723 containerd[1432]: 2025-07-11 00:05:13.012 [INFO][4372] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" HandleID="k8s-pod-network.9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Workload="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:13.037723 containerd[1432]: 2025-07-11 00:05:13.034 [INFO][4372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:13.037723 containerd[1432]: 2025-07-11 00:05:13.036 [INFO][4342] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:13.040353 containerd[1432]: time="2025-07-11T00:05:13.037892982Z" level=info msg="TearDown network for sandbox \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\" successfully" Jul 11 00:05:13.040353 containerd[1432]: time="2025-07-11T00:05:13.037927183Z" level=info msg="StopPodSandbox for \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\" returns successfully" Jul 11 00:05:13.039974 systemd[1]: run-netns-cni\x2d0ca5ae54\x2d6b10\x2d5aff\x2db606\x2d3427d4889094.mount: Deactivated successfully. 
Jul 11 00:05:13.040969 kubelet[2472]: E0711 00:05:13.040903 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:05:13.041510 containerd[1432]: time="2025-07-11T00:05:13.041317252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t59gl,Uid:2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6,Namespace:kube-system,Attempt:1,}" Jul 11 00:05:13.058970 containerd[1432]: 2025-07-11 00:05:12.955 [INFO][4341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:13.058970 containerd[1432]: 2025-07-11 00:05:12.956 [INFO][4341] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" iface="eth0" netns="/var/run/netns/cni-9ac175f7-1844-7147-d0c6-d20a1c2186dd" Jul 11 00:05:13.058970 containerd[1432]: 2025-07-11 00:05:12.957 [INFO][4341] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" iface="eth0" netns="/var/run/netns/cni-9ac175f7-1844-7147-d0c6-d20a1c2186dd" Jul 11 00:05:13.058970 containerd[1432]: 2025-07-11 00:05:12.958 [INFO][4341] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" iface="eth0" netns="/var/run/netns/cni-9ac175f7-1844-7147-d0c6-d20a1c2186dd" Jul 11 00:05:13.058970 containerd[1432]: 2025-07-11 00:05:12.958 [INFO][4341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:13.058970 containerd[1432]: 2025-07-11 00:05:12.958 [INFO][4341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:13.058970 containerd[1432]: 2025-07-11 00:05:12.996 [INFO][4370] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" HandleID="k8s-pod-network.9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:13.058970 containerd[1432]: 2025-07-11 00:05:12.996 [INFO][4370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:13.058970 containerd[1432]: 2025-07-11 00:05:13.034 [INFO][4370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:13.058970 containerd[1432]: 2025-07-11 00:05:13.047 [WARNING][4370] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" HandleID="k8s-pod-network.9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:13.058970 containerd[1432]: 2025-07-11 00:05:13.047 [INFO][4370] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" HandleID="k8s-pod-network.9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:13.058970 containerd[1432]: 2025-07-11 00:05:13.053 [INFO][4370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:13.058970 containerd[1432]: 2025-07-11 00:05:13.056 [INFO][4341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:13.059445 containerd[1432]: time="2025-07-11T00:05:13.059117464Z" level=info msg="TearDown network for sandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\" successfully" Jul 11 00:05:13.059445 containerd[1432]: time="2025-07-11T00:05:13.059143625Z" level=info msg="StopPodSandbox for \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\" returns successfully" Jul 11 00:05:13.061100 systemd[1]: run-netns-cni\x2d9ac175f7\x2d1844\x2d7147\x2dd0c6\x2dd20a1c2186dd.mount: Deactivated successfully. Jul 11 00:05:13.062431 containerd[1432]: time="2025-07-11T00:05:13.062376769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcf45dd9c-dssv7,Uid:dbf96476-6021-46fe-b842-fd862c7996c8,Namespace:calico-apiserver,Attempt:1,}" Jul 11 00:05:13.199784 systemd-networkd[1371]: caliebb2979b059: Link UP Jul 11 00:05:13.202465 systemd-networkd[1371]: caliebb2979b059: Gained carrier Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.107 [INFO][4390] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0 calico-apiserver-bcf45dd9c- calico-apiserver e57c9b88-19af-4e41-ab80-5de76c7ad975 993 0 2025-07-11 00:04:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bcf45dd9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bcf45dd9c-fb758 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliebb2979b059 [] [] }} ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-fb758" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-" Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.107 [INFO][4390] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-fb758" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.148 [INFO][4438] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" 
HandleID="k8s-pod-network.c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.148 [INFO][4438] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" HandleID="k8s-pod-network.c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bcf45dd9c-fb758", "timestamp":"2025-07-11 00:05:13.148386254 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.148 [INFO][4438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.148 [INFO][4438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.148 [INFO][4438] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.161 [INFO][4438] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" host="localhost" Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.167 [INFO][4438] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.172 [INFO][4438] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.176 [INFO][4438] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.178 [INFO][4438] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.178 [INFO][4438] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" host="localhost" Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.180 [INFO][4438] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491 Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.185 [INFO][4438] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" host="localhost" Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.190 [INFO][4438] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" host="localhost" Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.190 [INFO][4438] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] 
handle="k8s-pod-network.c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" host="localhost" Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.190 [INFO][4438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:13.225337 containerd[1432]: 2025-07-11 00:05:13.190 [INFO][4438] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" HandleID="k8s-pod-network.c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:13.226063 containerd[1432]: 2025-07-11 00:05:13.196 [INFO][4390] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-fb758" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0", GenerateName:"calico-apiserver-bcf45dd9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e57c9b88-19af-4e41-ab80-5de76c7ad975", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcf45dd9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bcf45dd9c-fb758", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliebb2979b059", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:13.226063 containerd[1432]: 2025-07-11 00:05:13.196 [INFO][4390] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-fb758" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:13.226063 containerd[1432]: 2025-07-11 00:05:13.196 [INFO][4390] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebb2979b059 ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-fb758" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:13.226063 containerd[1432]: 2025-07-11 00:05:13.202 [INFO][4390] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-fb758" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:13.226063 containerd[1432]: 2025-07-11 00:05:13.204 [INFO][4390] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-fb758" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0", GenerateName:"calico-apiserver-bcf45dd9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e57c9b88-19af-4e41-ab80-5de76c7ad975", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcf45dd9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491", Pod:"calico-apiserver-bcf45dd9c-fb758", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliebb2979b059", MAC:"62:fa:b5:21:af:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:13.226063 containerd[1432]: 2025-07-11 00:05:13.214 [INFO][4390] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-fb758" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:13.242438 containerd[1432]: time="2025-07-11T00:05:13.242285632Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:05:13.242438 containerd[1432]: time="2025-07-11T00:05:13.242351875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:05:13.242599 containerd[1432]: time="2025-07-11T00:05:13.242451638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:05:13.242988 containerd[1432]: time="2025-07-11T00:05:13.242948614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:05:13.261100 systemd[1]: Started cri-containerd-c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491.scope - libcontainer container c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491. 
Jul 11 00:05:13.275959 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:05:13.299131 containerd[1432]: time="2025-07-11T00:05:13.299093459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcf45dd9c-fb758,Uid:e57c9b88-19af-4e41-ab80-5de76c7ad975,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491\"" Jul 11 00:05:13.300782 containerd[1432]: time="2025-07-11T00:05:13.300752792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:05:13.307277 systemd-networkd[1371]: cali11b7f30a3a9: Link UP Jul 11 00:05:13.308503 systemd-networkd[1371]: cali11b7f30a3a9: Gained carrier Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.110 [INFO][4401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0 coredns-7c65d6cfc9- kube-system 2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6 995 0 2025-07-11 00:04:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-t59gl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali11b7f30a3a9 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t59gl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t59gl-" Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.111 [INFO][4401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t59gl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.160 [INFO][4435] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" HandleID="k8s-pod-network.c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" Workload="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.160 [INFO][4435] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" HandleID="k8s-pod-network.c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" Workload="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000120e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-t59gl", "timestamp":"2025-07-11 00:05:13.160159672 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.160 [INFO][4435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.190 [INFO][4435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.190 [INFO][4435] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.261 [INFO][4435] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" host="localhost" Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.269 [INFO][4435] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.274 [INFO][4435] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.276 [INFO][4435] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.279 [INFO][4435] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.279 [INFO][4435] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" host="localhost" Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.281 [INFO][4435] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9 Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.286 [INFO][4435] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" host="localhost" Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.295 [INFO][4435] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" host="localhost" Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.295 [INFO][4435] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" host="localhost" Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.295 [INFO][4435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:05:13.324379 containerd[1432]: 2025-07-11 00:05:13.295 [INFO][4435] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" HandleID="k8s-pod-network.c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" Workload="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:13.325086 containerd[1432]: 2025-07-11 00:05:13.302 [INFO][4401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t59gl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-t59gl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11b7f30a3a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:13.325086 containerd[1432]: 2025-07-11 00:05:13.303 [INFO][4401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t59gl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:13.325086 containerd[1432]: 2025-07-11 00:05:13.303 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11b7f30a3a9 ContainerID="c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t59gl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:13.325086 containerd[1432]: 2025-07-11 00:05:13.307 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t59gl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:13.325086 
containerd[1432]: 2025-07-11 00:05:13.308 [INFO][4401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t59gl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9", Pod:"coredns-7c65d6cfc9-t59gl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11b7f30a3a9", MAC:"7e:4d:ce:06:b1:20", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:13.325086 containerd[1432]: 2025-07-11 00:05:13.322 [INFO][4401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t59gl" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:13.348374 containerd[1432]: time="2025-07-11T00:05:13.348251799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:05:13.348374 containerd[1432]: time="2025-07-11T00:05:13.348339242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:05:13.348374 containerd[1432]: time="2025-07-11T00:05:13.348352322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:05:13.348689 containerd[1432]: time="2025-07-11T00:05:13.348460646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:05:13.367459 systemd[1]: Started cri-containerd-c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9.scope - libcontainer container c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9. Jul 11 00:05:13.383129 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:05:13.401544 systemd-networkd[1371]: cali60fd2e7552d: Link UP Jul 11 00:05:13.402093 systemd-networkd[1371]: cali60fd2e7552d: Gained carrier Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.131 [INFO][4423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0 calico-apiserver-bcf45dd9c- calico-apiserver dbf96476-6021-46fe-b842-fd862c7996c8 994 0 2025-07-11 00:04:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bcf45dd9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-bcf45dd9c-dssv7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali60fd2e7552d [] [] }} ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-dssv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-" Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.132 [INFO][4423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-dssv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.167 [INFO][4450] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" HandleID="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.167 [INFO][4450] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" HandleID="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-bcf45dd9c-dssv7", "timestamp":"2025-07-11 00:05:13.167707515 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.167 [INFO][4450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.295 [INFO][4450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.296 [INFO][4450] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.362 [INFO][4450] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" host="localhost" Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.369 [INFO][4450] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.376 [INFO][4450] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.378 [INFO][4450] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.381 [INFO][4450] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.382 [INFO][4450] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" host="localhost" Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.383 [INFO][4450] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.389 [INFO][4450] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" host="localhost" Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.395 [INFO][4450] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" host="localhost" Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.395 [INFO][4450] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" host="localhost" Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.395 [INFO][4450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:05:13.420826 containerd[1432]: 2025-07-11 00:05:13.395 [INFO][4450] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" HandleID="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:13.421625 containerd[1432]: 2025-07-11 00:05:13.398 [INFO][4423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-dssv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0", GenerateName:"calico-apiserver-bcf45dd9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbf96476-6021-46fe-b842-fd862c7996c8", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcf45dd9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-bcf45dd9c-dssv7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali60fd2e7552d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:13.421625 containerd[1432]: 2025-07-11 00:05:13.398 [INFO][4423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-dssv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:13.421625 containerd[1432]: 2025-07-11 00:05:13.398 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60fd2e7552d ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-dssv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:13.421625 containerd[1432]: 2025-07-11 00:05:13.402 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-dssv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:13.421625 containerd[1432]: 2025-07-11 00:05:13.402 [INFO][4423] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-dssv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0", GenerateName:"calico-apiserver-bcf45dd9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbf96476-6021-46fe-b842-fd862c7996c8", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcf45dd9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df", Pod:"calico-apiserver-bcf45dd9c-dssv7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali60fd2e7552d", MAC:"3e:67:fa:31:a0:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:13.421625 containerd[1432]: 2025-07-11 00:05:13.416 [INFO][4423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Namespace="calico-apiserver" Pod="calico-apiserver-bcf45dd9c-dssv7" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:13.430004 containerd[1432]: time="2025-07-11T00:05:13.429953945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t59gl,Uid:2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6,Namespace:kube-system,Attempt:1,} returns sandbox id \"c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9\"" Jul 11 00:05:13.431870 kubelet[2472]: E0711 00:05:13.431833 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:05:13.436595 containerd[1432]: time="2025-07-11T00:05:13.434920425Z" level=info msg="CreateContainer within sandbox \"c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 00:05:13.448950 containerd[1432]: time="2025-07-11T00:05:13.441801566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:05:13.448950 containerd[1432]: time="2025-07-11T00:05:13.448885314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:05:13.448950 containerd[1432]: time="2025-07-11T00:05:13.448899155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:05:13.449318 containerd[1432]: time="2025-07-11T00:05:13.449001758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:05:13.454120 containerd[1432]: time="2025-07-11T00:05:13.454007399Z" level=info msg="CreateContainer within sandbox \"c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d8c72c3599ae80a28767adbc27c77ba3ff4d34c837754393ab31e4e706233485\"" Jul 11 00:05:13.454574 containerd[1432]: time="2025-07-11T00:05:13.454549256Z" level=info msg="StartContainer for \"d8c72c3599ae80a28767adbc27c77ba3ff4d34c837754393ab31e4e706233485\"" Jul 11 00:05:13.470078 systemd[1]: Started cri-containerd-bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df.scope - libcontainer container bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df. Jul 11 00:05:13.487986 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:05:13.495059 systemd[1]: Started cri-containerd-d8c72c3599ae80a28767adbc27c77ba3ff4d34c837754393ab31e4e706233485.scope - libcontainer container d8c72c3599ae80a28767adbc27c77ba3ff4d34c837754393ab31e4e706233485. Jul 11 00:05:13.512557 containerd[1432]: time="2025-07-11T00:05:13.512495239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bcf45dd9c-dssv7,Uid:dbf96476-6021-46fe-b842-fd862c7996c8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\"" Jul 11 00:05:13.527625 containerd[1432]: time="2025-07-11T00:05:13.527503761Z" level=info msg="StartContainer for \"d8c72c3599ae80a28767adbc27c77ba3ff4d34c837754393ab31e4e706233485\" returns successfully" Jul 11 00:05:13.883829 containerd[1432]: time="2025-07-11T00:05:13.883743054Z" level=info msg="StopPodSandbox for \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\"" Jul 11 00:05:13.884295 containerd[1432]: time="2025-07-11T00:05:13.883835097Z" level=info msg="StopPodSandbox for \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\"" Jul 11 00:05:14.026650 containerd[1432]: 2025-07-11 00:05:13.950 [INFO][4668] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:14.026650 containerd[1432]: 2025-07-11 00:05:13.951 [INFO][4668] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" iface="eth0" netns="/var/run/netns/cni-89202130-e59f-df46-7818-88e33698c6a7" Jul 11 00:05:14.026650 containerd[1432]: 2025-07-11 00:05:13.951 [INFO][4668] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" iface="eth0" netns="/var/run/netns/cni-89202130-e59f-df46-7818-88e33698c6a7" Jul 11 00:05:14.026650 containerd[1432]: 2025-07-11 00:05:13.952 [INFO][4668] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" iface="eth0" netns="/var/run/netns/cni-89202130-e59f-df46-7818-88e33698c6a7" Jul 11 00:05:14.026650 containerd[1432]: 2025-07-11 00:05:13.952 [INFO][4668] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:14.026650 containerd[1432]: 2025-07-11 00:05:13.952 [INFO][4668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:14.026650 containerd[1432]: 2025-07-11 00:05:14.007 [INFO][4686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" HandleID="k8s-pod-network.1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Workload="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:14.026650 containerd[1432]: 2025-07-11 00:05:14.007 [INFO][4686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:14.026650 containerd[1432]: 2025-07-11 00:05:14.007 [INFO][4686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:14.026650 containerd[1432]: 2025-07-11 00:05:14.020 [WARNING][4686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" HandleID="k8s-pod-network.1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Workload="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:14.026650 containerd[1432]: 2025-07-11 00:05:14.021 [INFO][4686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" HandleID="k8s-pod-network.1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Workload="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:14.026650 containerd[1432]: 2025-07-11 00:05:14.022 [INFO][4686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:14.026650 containerd[1432]: 2025-07-11 00:05:14.025 [INFO][4668] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:14.027981 containerd[1432]: time="2025-07-11T00:05:14.026876554Z" level=info msg="TearDown network for sandbox \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\" successfully" Jul 11 00:05:14.027981 containerd[1432]: time="2025-07-11T00:05:14.026904515Z" level=info msg="StopPodSandbox for \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\" returns successfully" Jul 11 00:05:14.028031 kubelet[2472]: E0711 00:05:14.027244 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:05:14.028492 containerd[1432]: time="2025-07-11T00:05:14.028464764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fj8dx,Uid:78d5efbc-6d02-414b-bb0f-acc8122968c7,Namespace:kube-system,Attempt:1,}" Jul 11 00:05:14.030398 systemd[1]: run-netns-cni\x2d89202130\x2de59f\x2ddf46\x2d7818\x2d88e33698c6a7.mount: Deactivated successfully. 
Jul 11 00:05:14.045774 kubelet[2472]: E0711 00:05:14.044793 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:05:14.076596 kubelet[2472]: I0711 00:05:14.076520 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-t59gl" podStartSLOduration=36.076501108 podStartE2EDuration="36.076501108s" podCreationTimestamp="2025-07-11 00:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:05:14.061399035 +0000 UTC m=+42.274203781" watchObservedRunningTime="2025-07-11 00:05:14.076501108 +0000 UTC m=+42.289305854" Jul 11 00:05:14.105313 containerd[1432]: 2025-07-11 00:05:14.031 [INFO][4678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:14.105313 containerd[1432]: 2025-07-11 00:05:14.035 [INFO][4678] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" iface="eth0" netns="/var/run/netns/cni-9f8e41fc-231e-213b-9f07-85d1da5c123b" Jul 11 00:05:14.105313 containerd[1432]: 2025-07-11 00:05:14.035 [INFO][4678] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" iface="eth0" netns="/var/run/netns/cni-9f8e41fc-231e-213b-9f07-85d1da5c123b" Jul 11 00:05:14.105313 containerd[1432]: 2025-07-11 00:05:14.036 [INFO][4678] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" iface="eth0" netns="/var/run/netns/cni-9f8e41fc-231e-213b-9f07-85d1da5c123b" Jul 11 00:05:14.105313 containerd[1432]: 2025-07-11 00:05:14.036 [INFO][4678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:14.105313 containerd[1432]: 2025-07-11 00:05:14.036 [INFO][4678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:14.105313 containerd[1432]: 2025-07-11 00:05:14.078 [INFO][4698] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" HandleID="k8s-pod-network.9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Workload="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0" Jul 11 00:05:14.105313 containerd[1432]: 2025-07-11 00:05:14.078 [INFO][4698] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:14.105313 containerd[1432]: 2025-07-11 00:05:14.078 [INFO][4698] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:14.105313 containerd[1432]: 2025-07-11 00:05:14.094 [WARNING][4698] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" HandleID="k8s-pod-network.9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Workload="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0" Jul 11 00:05:14.105313 containerd[1432]: 2025-07-11 00:05:14.094 [INFO][4698] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" HandleID="k8s-pod-network.9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Workload="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0" Jul 11 00:05:14.105313 containerd[1432]: 2025-07-11 00:05:14.100 [INFO][4698] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:14.105313 containerd[1432]: 2025-07-11 00:05:14.102 [INFO][4678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:14.105770 containerd[1432]: time="2025-07-11T00:05:14.105659182Z" level=info msg="TearDown network for sandbox \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\" successfully" Jul 11 00:05:14.105770 containerd[1432]: time="2025-07-11T00:05:14.105693703Z" level=info msg="StopPodSandbox for \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\" returns successfully" Jul 11 00:05:14.108029 systemd[1]: run-netns-cni\x2d9f8e41fc\x2d231e\x2d213b\x2d9f07\x2d85d1da5c123b.mount: Deactivated successfully. Jul 11 00:05:14.109567 containerd[1432]: time="2025-07-11T00:05:14.109256175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ts85z,Uid:1a8f323e-1163-4dff-a9f9-a0583b72a07e,Namespace:calico-system,Attempt:1,}" Jul 11 00:05:14.227809 systemd-networkd[1371]: cali1e6f9e96cfa: Link UP Jul 11 00:05:14.228741 systemd-networkd[1371]: cali1e6f9e96cfa: Gained carrier Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.132 [INFO][4706] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0 coredns-7c65d6cfc9- kube-system 78d5efbc-6d02-414b-bb0f-acc8122968c7 1016 0 2025-07-11 00:04:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-fj8dx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e6f9e96cfa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fj8dx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fj8dx-" Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.133 [INFO][4706] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fj8dx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.172 [INFO][4738] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" HandleID="k8s-pod-network.bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" Workload="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 
00:05:14.173 [INFO][4738] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" HandleID="k8s-pod-network.bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" Workload="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136480), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-fj8dx", "timestamp":"2025-07-11 00:05:14.172823526 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.173 [INFO][4738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.173 [INFO][4738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.173 [INFO][4738] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.184 [INFO][4738] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" host="localhost" Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.191 [INFO][4738] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.197 [INFO][4738] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.199 [INFO][4738] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.207 [INFO][4738] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.207 [INFO][4738] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" host="localhost" Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.209 [INFO][4738] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5 Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.214 [INFO][4738] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" host="localhost" Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.221 [INFO][4738] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" host="localhost" Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.222 [INFO][4738] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" host="localhost" Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.222 [INFO][4738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:05:14.243694 containerd[1432]: 2025-07-11 00:05:14.222 [INFO][4738] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" HandleID="k8s-pod-network.bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" Workload="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:14.244597 containerd[1432]: 2025-07-11 00:05:14.225 [INFO][4706] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fj8dx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"78d5efbc-6d02-414b-bb0f-acc8122968c7", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-fj8dx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e6f9e96cfa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:14.244597 containerd[1432]: 2025-07-11 00:05:14.226 [INFO][4706] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fj8dx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:14.244597 containerd[1432]: 2025-07-11 00:05:14.226 [INFO][4706] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e6f9e96cfa ContainerID="bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fj8dx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:14.244597 containerd[1432]: 2025-07-11 00:05:14.229 [INFO][4706] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fj8dx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:14.244597 
containerd[1432]: 2025-07-11 00:05:14.229 [INFO][4706] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fj8dx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"78d5efbc-6d02-414b-bb0f-acc8122968c7", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5", Pod:"coredns-7c65d6cfc9-fj8dx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e6f9e96cfa", MAC:"aa:c9:d2:a7:9f:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:05:14.244597 containerd[1432]: 2025-07-11 00:05:14.240 [INFO][4706] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-fj8dx" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0"
Jul 11 00:05:14.262787 containerd[1432]: time="2025-07-11T00:05:14.262665381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:05:14.262787 containerd[1432]: time="2025-07-11T00:05:14.262735943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:05:14.262787 containerd[1432]: time="2025-07-11T00:05:14.262746583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:05:14.263174 containerd[1432]: time="2025-07-11T00:05:14.262872907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:05:14.272498 systemd-networkd[1371]: caliebb2979b059: Gained IPv6LL
Jul 11 00:05:14.280101 systemd[1]: Started cri-containerd-bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5.scope - libcontainer container bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5.
Jul 11 00:05:14.293136 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 11 00:05:14.322537 systemd-networkd[1371]: califa2fc2be6a4: Link UP
Jul 11 00:05:14.323737 systemd-networkd[1371]: califa2fc2be6a4: Gained carrier
Jul 11 00:05:14.331116 containerd[1432]: time="2025-07-11T00:05:14.331060403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-fj8dx,Uid:78d5efbc-6d02-414b-bb0f-acc8122968c7,Namespace:kube-system,Attempt:1,} returns sandbox id \"bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5\""
Jul 11 00:05:14.332271 kubelet[2472]: E0711 00:05:14.332239 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:05:14.339851 containerd[1432]: time="2025-07-11T00:05:14.339710474Z" level=info msg="CreateContainer within sandbox \"bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.162 [INFO][4724] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--ts85z-eth0 goldmane-58fd7646b9- calico-system 1a8f323e-1163-4dff-a9f9-a0583b72a07e 1017 0 2025-07-11 00:04:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-ts85z eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califa2fc2be6a4 [] [] }} ContainerID="435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" Namespace="calico-system" Pod="goldmane-58fd7646b9-ts85z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ts85z-"
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.163 [INFO][4724] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" Namespace="calico-system" Pod="goldmane-58fd7646b9-ts85z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0"
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.201 [INFO][4748] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" HandleID="k8s-pod-network.435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" Workload="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0"
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.201 [INFO][4748] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" HandleID="k8s-pod-network.435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" Workload="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-ts85z", "timestamp":"2025-07-11 00:05:14.20104449 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.201 [INFO][4748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.222 [INFO][4748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.222 [INFO][4748] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.285 [INFO][4748] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" host="localhost"
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.292 [INFO][4748] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.300 [INFO][4748] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.303 [INFO][4748] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.306 [INFO][4748] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.306 [INFO][4748] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" host="localhost"
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.308 [INFO][4748] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.311 [INFO][4748] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" host="localhost"
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.317 [INFO][4748] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" host="localhost"
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.318 [INFO][4748] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" host="localhost"
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.318 [INFO][4748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:05:14.342796 containerd[1432]: 2025-07-11 00:05:14.318 [INFO][4748] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" HandleID="k8s-pod-network.435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" Workload="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0"
Jul 11 00:05:14.343728 containerd[1432]: 2025-07-11 00:05:14.320 [INFO][4724] cni-plugin/k8s.go 418: Populated endpoint ContainerID="435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" Namespace="calico-system" Pod="goldmane-58fd7646b9-ts85z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ts85z-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"1a8f323e-1163-4dff-a9f9-a0583b72a07e", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-ts85z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califa2fc2be6a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:05:14.343728 containerd[1432]: 2025-07-11 00:05:14.320 [INFO][4724] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" Namespace="calico-system" Pod="goldmane-58fd7646b9-ts85z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0"
Jul 11 00:05:14.343728 containerd[1432]: 2025-07-11 00:05:14.321 [INFO][4724] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa2fc2be6a4 ContainerID="435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" Namespace="calico-system" Pod="goldmane-58fd7646b9-ts85z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0"
Jul 11 00:05:14.343728 containerd[1432]: 2025-07-11 00:05:14.323 [INFO][4724] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" Namespace="calico-system" Pod="goldmane-58fd7646b9-ts85z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0"
Jul 11 00:05:14.343728 containerd[1432]: 2025-07-11 00:05:14.324 [INFO][4724] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" Namespace="calico-system" Pod="goldmane-58fd7646b9-ts85z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ts85z-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"1a8f323e-1163-4dff-a9f9-a0583b72a07e", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f", Pod:"goldmane-58fd7646b9-ts85z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califa2fc2be6a4", MAC:"ae:ba:b6:f3:96:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:05:14.343728 containerd[1432]: 2025-07-11 00:05:14.338 [INFO][4724] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f" Namespace="calico-system" Pod="goldmane-58fd7646b9-ts85z" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0"
Jul 11 00:05:14.362311 containerd[1432]: time="2025-07-11T00:05:14.362249620Z" level=info msg="CreateContainer within sandbox \"bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f22e86ec9cd1afafda8ada94ad3cbb22d9f79379a81ee86d67799e84a15907da\""
Jul 11 00:05:14.364241 containerd[1432]: time="2025-07-11T00:05:14.364200522Z" level=info msg="StartContainer for \"f22e86ec9cd1afafda8ada94ad3cbb22d9f79379a81ee86d67799e84a15907da\""
Jul 11 00:05:14.370601 containerd[1432]: time="2025-07-11T00:05:14.368569618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:05:14.370601 containerd[1432]: time="2025-07-11T00:05:14.368824346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:05:14.370601 containerd[1432]: time="2025-07-11T00:05:14.368861548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:05:14.370601 containerd[1432]: time="2025-07-11T00:05:14.368961711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:05:14.398176 systemd[1]: Started cri-containerd-435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f.scope - libcontainer container 435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f.
Jul 11 00:05:14.402305 systemd[1]: Started cri-containerd-f22e86ec9cd1afafda8ada94ad3cbb22d9f79379a81ee86d67799e84a15907da.scope - libcontainer container f22e86ec9cd1afafda8ada94ad3cbb22d9f79379a81ee86d67799e84a15907da. Jul 11 00:05:14.419298 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:05:14.459440 containerd[1432]: time="2025-07-11T00:05:14.459384384Z" level=info msg="StartContainer for \"f22e86ec9cd1afafda8ada94ad3cbb22d9f79379a81ee86d67799e84a15907da\" returns successfully" Jul 11 00:05:14.489233 containerd[1432]: time="2025-07-11T00:05:14.489095514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ts85z,Uid:1a8f323e-1163-4dff-a9f9-a0583b72a07e,Namespace:calico-system,Attempt:1,} returns sandbox id \"435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f\"" Jul 11 00:05:14.592293 systemd-networkd[1371]: cali60fd2e7552d: Gained IPv6LL Jul 11 00:05:15.055598 kubelet[2472]: E0711 00:05:15.054303 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:05:15.055598 kubelet[2472]: E0711 00:05:15.054383 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:05:15.094380 kubelet[2472]: I0711 00:05:15.094153 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-fj8dx" podStartSLOduration=37.094132117 podStartE2EDuration="37.094132117s" podCreationTimestamp="2025-07-11 00:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:05:15.078541401 +0000 UTC m=+43.291346187" watchObservedRunningTime="2025-07-11 00:05:15.094132117 +0000 UTC m=+43.306936863" Jul 11 00:05:15.104008 systemd-networkd[1371]: cali11b7f30a3a9: Gained IPv6LL Jul 11 00:05:15.269214 systemd[1]: Started sshd@7-10.0.0.27:22-10.0.0.1:54384.service - OpenSSH per-connection server daemon (10.0.0.1:54384). 
Jul 11 00:05:15.341197 containerd[1432]: time="2025-07-11T00:05:15.341139946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:15.345170 containerd[1432]: time="2025-07-11T00:05:15.341652242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 11 00:05:15.345170 containerd[1432]: time="2025-07-11T00:05:15.342507748Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:15.345170 containerd[1432]: time="2025-07-11T00:05:15.344709495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:15.346190 containerd[1432]: time="2025-07-11T00:05:15.346147059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.045220182s" Jul 11 00:05:15.346259 containerd[1432]: time="2025-07-11T00:05:15.346193660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 11 00:05:15.348036 containerd[1432]: time="2025-07-11T00:05:15.347999396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 00:05:15.350968 containerd[1432]: time="2025-07-11T00:05:15.350680758Z" level=info msg="CreateContainer within sandbox \"c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:05:15.371894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2015708352.mount: Deactivated successfully. Jul 11 00:05:15.373456 containerd[1432]: time="2025-07-11T00:05:15.373401292Z" level=info msg="CreateContainer within sandbox \"c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\"" Jul 11 00:05:15.374127 containerd[1432]: time="2025-07-11T00:05:15.374021751Z" level=info msg="StartContainer for \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\"" Jul 11 00:05:15.405912 sshd[4905]: Accepted publickey for core from 10.0.0.1 port 54384 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:05:15.408077 sshd[4905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:05:15.408086 systemd[1]: Started cri-containerd-b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6.scope - libcontainer container b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6. Jul 11 00:05:15.413544 systemd-logind[1416]: New session 8 of user core. Jul 11 00:05:15.422070 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 11 00:05:15.425255 systemd-networkd[1371]: califa2fc2be6a4: Gained IPv6LL Jul 11 00:05:15.456631 containerd[1432]: time="2025-07-11T00:05:15.456573114Z" level=info msg="StartContainer for \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\" returns successfully" Jul 11 00:05:15.489640 systemd-networkd[1371]: cali1e6f9e96cfa: Gained IPv6LL Jul 11 00:05:15.576583 containerd[1432]: time="2025-07-11T00:05:15.576128287Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:15.577172 containerd[1432]: time="2025-07-11T00:05:15.577133358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 11 00:05:15.579780 containerd[1432]: time="2025-07-11T00:05:15.579514191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 231.479354ms" Jul 11 00:05:15.579780 containerd[1432]: time="2025-07-11T00:05:15.579563152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 11 00:05:15.584082 containerd[1432]: time="2025-07-11T00:05:15.581106559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 00:05:15.585178 containerd[1432]: time="2025-07-11T00:05:15.585026999Z" level=info msg="CreateContainer within sandbox \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:05:15.596201 containerd[1432]: time="2025-07-11T00:05:15.596055176Z" level=info msg="CreateContainer within sandbox \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\"" Jul 11 00:05:15.597897 containerd[1432]: time="2025-07-11T00:05:15.596578552Z" level=info msg="StartContainer for \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\"" Jul 11 00:05:15.638037 systemd[1]: Started cri-containerd-8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3.scope - libcontainer container 8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3. Jul 11 00:05:15.679076 containerd[1432]: time="2025-07-11T00:05:15.679014752Z" level=info msg="StartContainer for \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\" returns successfully" Jul 11 00:05:15.690105 sshd[4905]: pam_unix(sshd:session): session closed for user core Jul 11 00:05:15.696592 systemd[1]: sshd@7-10.0.0.27:22-10.0.0.1:54384.service: Deactivated successfully. Jul 11 00:05:15.699673 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 00:05:15.700649 systemd-logind[1416]: Session 8 logged out. Waiting for processes to exit. Jul 11 00:05:15.702173 systemd-logind[1416]: Removed session 8. 
Jul 11 00:05:15.884709 containerd[1432]: time="2025-07-11T00:05:15.884566113Z" level=info msg="StopPodSandbox for \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\""
Jul 11 00:05:15.887166 containerd[1432]: time="2025-07-11T00:05:15.886071319Z" level=info msg="StopPodSandbox for \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\""
Jul 11 00:05:15.887166 containerd[1432]: time="2025-07-11T00:05:15.886084640Z" level=info msg="StopPodSandbox for \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\""
Jul 11 00:05:16.030479 containerd[1432]: 2025-07-11 00:05:15.957 [INFO][5062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf"
Jul 11 00:05:16.030479 containerd[1432]: 2025-07-11 00:05:15.958 [INFO][5062] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" iface="eth0" netns="/var/run/netns/cni-a418bd1a-ea92-6c65-2853-81f445b655c3"
Jul 11 00:05:16.030479 containerd[1432]: 2025-07-11 00:05:15.958 [INFO][5062] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" iface="eth0" netns="/var/run/netns/cni-a418bd1a-ea92-6c65-2853-81f445b655c3"
Jul 11 00:05:16.030479 containerd[1432]: 2025-07-11 00:05:15.960 [INFO][5062] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" iface="eth0" netns="/var/run/netns/cni-a418bd1a-ea92-6c65-2853-81f445b655c3"
Jul 11 00:05:16.030479 containerd[1432]: 2025-07-11 00:05:15.960 [INFO][5062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf"
Jul 11 00:05:16.030479 containerd[1432]: 2025-07-11 00:05:15.960 [INFO][5062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf"
Jul 11 00:05:16.030479 containerd[1432]: 2025-07-11 00:05:16.008 [INFO][5079] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" HandleID="k8s-pod-network.0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Workload="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0"
Jul 11 00:05:16.030479 containerd[1432]: 2025-07-11 00:05:16.009 [INFO][5079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:05:16.030479 containerd[1432]: 2025-07-11 00:05:16.009 [INFO][5079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:05:16.030479 containerd[1432]: 2025-07-11 00:05:16.020 [WARNING][5079] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" HandleID="k8s-pod-network.0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Workload="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0"
Jul 11 00:05:16.030479 containerd[1432]: 2025-07-11 00:05:16.020 [INFO][5079] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" HandleID="k8s-pod-network.0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Workload="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0"
Jul 11 00:05:16.030479 containerd[1432]: 2025-07-11 00:05:16.024 [INFO][5079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:05:16.030479 containerd[1432]: 2025-07-11 00:05:16.027 [INFO][5062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf"
Jul 11 00:05:16.034744 containerd[1432]: time="2025-07-11T00:05:16.030688797Z" level=info msg="TearDown network for sandbox \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\" successfully"
Jul 11 00:05:16.034744 containerd[1432]: time="2025-07-11T00:05:16.030723758Z" level=info msg="StopPodSandbox for \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\" returns successfully"
Jul 11 00:05:16.033196 systemd[1]: run-netns-cni\x2da418bd1a\x2dea92\x2d6c65\x2d2853\x2d81f445b655c3.mount: Deactivated successfully.
Jul 11 00:05:16.035270 containerd[1432]: time="2025-07-11T00:05:16.035205412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7688b6944f-z5wj7,Uid:f2557641-a3f6-4d30-9fc0-458168133b25,Namespace:calico-system,Attempt:1,}"
Jul 11 00:05:16.057026 containerd[1432]: 2025-07-11 00:05:15.992 [INFO][5047] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2"
Jul 11 00:05:16.057026 containerd[1432]: 2025-07-11 00:05:15.992 [INFO][5047] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" iface="eth0" netns="/var/run/netns/cni-fbdf24ab-434f-e391-29b7-1e632871d4d7"
Jul 11 00:05:16.057026 containerd[1432]: 2025-07-11 00:05:15.993 [INFO][5047] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" iface="eth0" netns="/var/run/netns/cni-fbdf24ab-434f-e391-29b7-1e632871d4d7"
Jul 11 00:05:16.057026 containerd[1432]: 2025-07-11 00:05:15.993 [INFO][5047] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" iface="eth0" netns="/var/run/netns/cni-fbdf24ab-434f-e391-29b7-1e632871d4d7"
Jul 11 00:05:16.057026 containerd[1432]: 2025-07-11 00:05:15.993 [INFO][5047] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2"
Jul 11 00:05:16.057026 containerd[1432]: 2025-07-11 00:05:15.993 [INFO][5047] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2"
Jul 11 00:05:16.057026 containerd[1432]: 2025-07-11 00:05:16.029 [INFO][5087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" HandleID="k8s-pod-network.fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Workload="localhost-k8s-csi--node--driver--8t882-eth0"
Jul 11 00:05:16.057026 containerd[1432]: 2025-07-11 00:05:16.030 [INFO][5087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:05:16.057026 containerd[1432]: 2025-07-11 00:05:16.030 [INFO][5087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:05:16.057026 containerd[1432]: 2025-07-11 00:05:16.047 [WARNING][5087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" HandleID="k8s-pod-network.fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Workload="localhost-k8s-csi--node--driver--8t882-eth0"
Jul 11 00:05:16.057026 containerd[1432]: 2025-07-11 00:05:16.047 [INFO][5087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" HandleID="k8s-pod-network.fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Workload="localhost-k8s-csi--node--driver--8t882-eth0"
Jul 11 00:05:16.057026 containerd[1432]: 2025-07-11 00:05:16.049 [INFO][5087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:05:16.057026 containerd[1432]: 2025-07-11 00:05:16.053 [INFO][5047] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2"
Jul 11 00:05:16.059528 systemd[1]: run-netns-cni\x2dfbdf24ab\x2d434f\x2de391\x2d29b7\x2d1e632871d4d7.mount: Deactivated successfully.
Jul 11 00:05:16.059958 containerd[1432]: time="2025-07-11T00:05:16.059638021Z" level=info msg="TearDown network for sandbox \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\" successfully"
Jul 11 00:05:16.059958 containerd[1432]: time="2025-07-11T00:05:16.059677542Z" level=info msg="StopPodSandbox for \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\" returns successfully"
Jul 11 00:05:16.062067 containerd[1432]: time="2025-07-11T00:05:16.061758604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8t882,Uid:69b81cc5-c8b0-45b3-aeb9-88292bebdc48,Namespace:calico-system,Attempt:1,}"
Jul 11 00:05:16.077861 kubelet[2472]: E0711 00:05:16.076843 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:05:16.078623 kubelet[2472]: E0711 00:05:16.077235 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:05:16.084953 containerd[1432]: 2025-07-11 00:05:15.997 [INFO][5063] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2"
Jul 11 00:05:16.084953 containerd[1432]: 2025-07-11 00:05:15.998 [INFO][5063] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" iface="eth0" netns="/var/run/netns/cni-cded8476-1eb4-e068-0b76-f9cd53efe278"
Jul 11 00:05:16.084953 containerd[1432]: 2025-07-11 00:05:15.998 [INFO][5063] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" iface="eth0" netns="/var/run/netns/cni-cded8476-1eb4-e068-0b76-f9cd53efe278"
Jul 11 00:05:16.084953 containerd[1432]: 2025-07-11 00:05:15.999 [INFO][5063] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" iface="eth0" netns="/var/run/netns/cni-cded8476-1eb4-e068-0b76-f9cd53efe278"
Jul 11 00:05:16.084953 containerd[1432]: 2025-07-11 00:05:15.999 [INFO][5063] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2"
Jul 11 00:05:16.084953 containerd[1432]: 2025-07-11 00:05:15.999 [INFO][5063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2"
Jul 11 00:05:16.084953 containerd[1432]: 2025-07-11 00:05:16.043 [INFO][5089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" HandleID="k8s-pod-network.9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Workload="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0"
Jul 11 00:05:16.084953 containerd[1432]: 2025-07-11 00:05:16.043 [INFO][5089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:05:16.084953 containerd[1432]: 2025-07-11 00:05:16.051 [INFO][5089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:05:16.084953 containerd[1432]: 2025-07-11 00:05:16.067 [WARNING][5089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" HandleID="k8s-pod-network.9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Workload="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0"
Jul 11 00:05:16.084953 containerd[1432]: 2025-07-11 00:05:16.067 [INFO][5089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" HandleID="k8s-pod-network.9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Workload="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0"
Jul 11 00:05:16.084953 containerd[1432]: 2025-07-11 00:05:16.074 [INFO][5089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 11 00:05:16.084953 containerd[1432]: 2025-07-11 00:05:16.078 [INFO][5063] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2"
Jul 11 00:05:16.087399 containerd[1432]: time="2025-07-11T00:05:16.087330927Z" level=info msg="TearDown network for sandbox \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\" successfully"
Jul 11 00:05:16.087399 containerd[1432]: time="2025-07-11T00:05:16.087371289Z" level=info msg="StopPodSandbox for \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\" returns successfully"
Jul 11 00:05:16.088298 containerd[1432]: time="2025-07-11T00:05:16.088215834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567bf94b46-mxmnt,Uid:d36dbd41-8dff-4eb6-a8f2-69b019debb74,Namespace:calico-apiserver,Attempt:1,}"
Jul 11 00:05:16.089863 kubelet[2472]: I0711 00:05:16.089525 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bcf45dd9c-dssv7" podStartSLOduration=26.022940376 podStartE2EDuration="28.089508272s" podCreationTimestamp="2025-07-11 00:04:48 +0000 UTC" firstStartedPulling="2025-07-11 00:05:13.513961886 +0000 UTC m=+41.726766592" lastFinishedPulling="2025-07-11 00:05:15.580529742 +0000 UTC m=+43.793334488" observedRunningTime="2025-07-11 00:05:16.089069299 +0000 UTC m=+44.301874085" watchObservedRunningTime="2025-07-11 00:05:16.089508272 +0000 UTC m=+44.302313018"
Jul 11 00:05:16.102301 kubelet[2472]: I0711 00:05:16.101433 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bcf45dd9c-fb758" podStartSLOduration=26.054431952 podStartE2EDuration="28.101416988s" podCreationTimestamp="2025-07-11 00:04:48 +0000 UTC" firstStartedPulling="2025-07-11 00:05:13.300537465 +0000 UTC m=+41.513342171" lastFinishedPulling="2025-07-11 00:05:15.347522461 +0000 UTC m=+43.560327207" observedRunningTime="2025-07-11 00:05:16.100759728 +0000 UTC m=+44.313564474" watchObservedRunningTime="2025-07-11 00:05:16.101416988 +0000 UTC m=+44.314221694"
Jul 11 00:05:16.256731 systemd-networkd[1371]: calif99ae2c31c8: Link UP
Jul 11 00:05:16.257006 systemd-networkd[1371]: calif99ae2c31c8: Gained carrier
Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.152 [INFO][5105] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0 calico-kube-controllers-7688b6944f- calico-system f2557641-a3f6-4d30-9fc0-458168133b25 1094 0 2025-07-11 00:04:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7688b6944f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7688b6944f-z5wj7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif99ae2c31c8 [] [] }} ContainerID="2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" Namespace="calico-system" Pod="calico-kube-controllers-7688b6944f-z5wj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-"
Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.152 [INFO][5105] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" Namespace="calico-system" Pod="calico-kube-controllers-7688b6944f-z5wj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0"
Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.193 [INFO][5155] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" HandleID="k8s-pod-network.2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" Workload="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0"
Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.193 [INFO][5155] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" HandleID="k8s-pod-network.2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" Workload="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003555f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7688b6944f-z5wj7", "timestamp":"2025-07-11 00:05:16.193101924 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.193 [INFO][5155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.193 [INFO][5155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.193 [INFO][5155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.206 [INFO][5155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" host="localhost" Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.221 [INFO][5155] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.226 [INFO][5155] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.228 [INFO][5155] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.231 [INFO][5155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.231 [INFO][5155] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" host="localhost" Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.232 [INFO][5155] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.236 [INFO][5155] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" host="localhost" Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.243 [INFO][5155] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" host="localhost" Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.243 [INFO][5155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" host="localhost" Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.243 [INFO][5155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:05:16.273742 containerd[1432]: 2025-07-11 00:05:16.243 [INFO][5155] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" HandleID="k8s-pod-network.2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" Workload="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0"
Jul 11 00:05:16.274632 containerd[1432]: 2025-07-11 00:05:16.245 [INFO][5105] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" Namespace="calico-system" Pod="calico-kube-controllers-7688b6944f-z5wj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0", GenerateName:"calico-kube-controllers-7688b6944f-", Namespace:"calico-system", SelfLink:"", UID:"f2557641-a3f6-4d30-9fc0-458168133b25", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7688b6944f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7688b6944f-z5wj7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif99ae2c31c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:05:16.274632 containerd[1432]: 2025-07-11 00:05:16.245 [INFO][5105] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" Namespace="calico-system" Pod="calico-kube-controllers-7688b6944f-z5wj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0"
Jul 11 00:05:16.274632 containerd[1432]: 2025-07-11 00:05:16.245 [INFO][5105] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif99ae2c31c8 ContainerID="2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" Namespace="calico-system" Pod="calico-kube-controllers-7688b6944f-z5wj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0"
Jul 11 00:05:16.274632 containerd[1432]: 2025-07-11 00:05:16.256 [INFO][5105] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" Namespace="calico-system" Pod="calico-kube-controllers-7688b6944f-z5wj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0"
Jul 11 00:05:16.274632 containerd[1432]: 2025-07-11 00:05:16.257 [INFO][5105] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" Namespace="calico-system" Pod="calico-kube-controllers-7688b6944f-z5wj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0", GenerateName:"calico-kube-controllers-7688b6944f-", Namespace:"calico-system", SelfLink:"", UID:"f2557641-a3f6-4d30-9fc0-458168133b25", ResourceVersion:"1094", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7688b6944f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca", Pod:"calico-kube-controllers-7688b6944f-z5wj7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif99ae2c31c8", MAC:"d2:84:c7:15:4f:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:05:16.274632 containerd[1432]: 2025-07-11 00:05:16.268 [INFO][5105] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca" Namespace="calico-system" Pod="calico-kube-controllers-7688b6944f-z5wj7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0"
Jul 11 00:05:16.311253 containerd[1432]: time="2025-07-11T00:05:16.311075684Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:05:16.311456 containerd[1432]: time="2025-07-11T00:05:16.311232369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:05:16.311525 containerd[1432]: time="2025-07-11T00:05:16.311441055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:05:16.312736 containerd[1432]: time="2025-07-11T00:05:16.312098434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:05:16.336084 systemd[1]: Started cri-containerd-2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca.scope - libcontainer container 2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca.
Jul 11 00:05:16.366188 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:05:16.380541 systemd-networkd[1371]: calid806a331429: Link UP Jul 11 00:05:16.380795 systemd-networkd[1371]: calid806a331429: Gained carrier Jul 11 00:05:16.400039 containerd[1432]: time="2025-07-11T00:05:16.399965336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7688b6944f-z5wj7,Uid:f2557641-a3f6-4d30-9fc0-458168133b25,Namespace:calico-system,Attempt:1,} returns sandbox id \"2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca\"" Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.154 [INFO][5119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8t882-eth0 csi-node-driver- calico-system 69b81cc5-c8b0-45b3-aeb9-88292bebdc48 1096 0 2025-07-11 00:04:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8t882 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid806a331429 [] [] }} ContainerID="eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" Namespace="calico-system" Pod="csi-node-driver-8t882" WorkloadEndpoint="localhost-k8s-csi--node--driver--8t882-" Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.154 [INFO][5119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" Namespace="calico-system" Pod="csi-node-driver-8t882" WorkloadEndpoint="localhost-k8s-csi--node--driver--8t882-eth0" Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.209 [INFO][5156] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" HandleID="k8s-pod-network.eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" Workload="localhost-k8s-csi--node--driver--8t882-eth0" Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.217 [INFO][5156] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" HandleID="k8s-pod-network.eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" Workload="localhost-k8s-csi--node--driver--8t882-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000353210), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8t882", "timestamp":"2025-07-11 00:05:16.209418171 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.217 [INFO][5156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.243 [INFO][5156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.243 [INFO][5156] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.307 [INFO][5156] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" host="localhost" Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.325 [INFO][5156] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.333 [INFO][5156] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.335 [INFO][5156] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.339 [INFO][5156] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.339 [INFO][5156] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" host="localhost" Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.342 [INFO][5156] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1 Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.347 [INFO][5156] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" host="localhost" Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.356 [INFO][5156] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" host="localhost" Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.356 [INFO][5156] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" host="localhost" Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.356 [INFO][5156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:05:16.402829 containerd[1432]: 2025-07-11 00:05:16.356 [INFO][5156] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" HandleID="k8s-pod-network.eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" Workload="localhost-k8s-csi--node--driver--8t882-eth0"
Jul 11 00:05:16.403366 containerd[1432]: 2025-07-11 00:05:16.359 [INFO][5119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" Namespace="calico-system" Pod="csi-node-driver-8t882" WorkloadEndpoint="localhost-k8s-csi--node--driver--8t882-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8t882-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69b81cc5-c8b0-45b3-aeb9-88292bebdc48", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8t882", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid806a331429", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:05:16.403366 containerd[1432]: 2025-07-11 00:05:16.359 [INFO][5119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" Namespace="calico-system" Pod="csi-node-driver-8t882" WorkloadEndpoint="localhost-k8s-csi--node--driver--8t882-eth0"
Jul 11 00:05:16.403366 containerd[1432]: 2025-07-11 00:05:16.359 [INFO][5119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid806a331429 ContainerID="eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" Namespace="calico-system" Pod="csi-node-driver-8t882" WorkloadEndpoint="localhost-k8s-csi--node--driver--8t882-eth0"
Jul 11 00:05:16.403366 containerd[1432]: 2025-07-11 00:05:16.380 [INFO][5119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" Namespace="calico-system" Pod="csi-node-driver-8t882" WorkloadEndpoint="localhost-k8s-csi--node--driver--8t882-eth0"
Jul 11 00:05:16.403366 containerd[1432]: 2025-07-11 00:05:16.380 [INFO][5119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" Namespace="calico-system" Pod="csi-node-driver-8t882" WorkloadEndpoint="localhost-k8s-csi--node--driver--8t882-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8t882-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69b81cc5-c8b0-45b3-aeb9-88292bebdc48", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1", Pod:"csi-node-driver-8t882", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid806a331429", MAC:"3a:b1:17:11:76:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 11 00:05:16.403366 containerd[1432]: 2025-07-11 00:05:16.399 [INFO][5119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1" Namespace="calico-system" Pod="csi-node-driver-8t882" WorkloadEndpoint="localhost-k8s-csi--node--driver--8t882-eth0"
Jul 11 00:05:16.432544 containerd[1432]: time="2025-07-11T00:05:16.432407705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 11 00:05:16.433099 containerd[1432]: time="2025-07-11T00:05:16.432520148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 11 00:05:16.433099 containerd[1432]: time="2025-07-11T00:05:16.432536868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:05:16.433099 containerd[1432]: time="2025-07-11T00:05:16.432643152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 11 00:05:16.456047 systemd[1]: Started cri-containerd-eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1.scope - libcontainer container eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1.
Jul 11 00:05:16.476233 systemd-networkd[1371]: cali70533c489a4: Link UP Jul 11 00:05:16.477214 systemd-networkd[1371]: cali70533c489a4: Gained carrier Jul 11 00:05:16.501565 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.175 [INFO][5134] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0 calico-apiserver-567bf94b46- calico-apiserver d36dbd41-8dff-4eb6-a8f2-69b019debb74 1097 0 2025-07-11 00:04:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:567bf94b46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-567bf94b46-mxmnt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali70533c489a4 [] [] }} ContainerID="060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-mxmnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-" Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.175 [INFO][5134] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-mxmnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.222 [INFO][5167] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" HandleID="k8s-pod-network.060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" Workload="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.222 [INFO][5167] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" HandleID="k8s-pod-network.060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" Workload="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a37d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-567bf94b46-mxmnt", "timestamp":"2025-07-11 00:05:16.222310995 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.222 [INFO][5167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.356 [INFO][5167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.358 [INFO][5167] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.409 [INFO][5167] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" host="localhost" Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.424 [INFO][5167] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.434 [INFO][5167] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.438 [INFO][5167] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.445 [INFO][5167] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.445 [INFO][5167] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" host="localhost" Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.448 [INFO][5167] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6 Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.460 [INFO][5167] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" host="localhost" Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.469 [INFO][5167] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" host="localhost" Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.470 [INFO][5167] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" host="localhost" Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.470 [INFO][5167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:05:16.502363 containerd[1432]: 2025-07-11 00:05:16.470 [INFO][5167] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" HandleID="k8s-pod-network.060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" Workload="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" Jul 11 00:05:16.502979 containerd[1432]: 2025-07-11 00:05:16.473 [INFO][5134] cni-plugin/k8s.go 418: Populated endpoint ContainerID="060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-mxmnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0", GenerateName:"calico-apiserver-567bf94b46-", Namespace:"calico-apiserver", SelfLink:"", UID:"d36dbd41-8dff-4eb6-a8f2-69b019debb74", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567bf94b46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-567bf94b46-mxmnt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70533c489a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:16.502979 containerd[1432]: 2025-07-11 00:05:16.473 [INFO][5134] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-mxmnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" Jul 11 00:05:16.502979 containerd[1432]: 2025-07-11 00:05:16.473 [INFO][5134] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70533c489a4 ContainerID="060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-mxmnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" Jul 11 00:05:16.502979 containerd[1432]: 2025-07-11 00:05:16.476 [INFO][5134] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-mxmnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" Jul 11 00:05:16.502979 containerd[1432]: 2025-07-11 00:05:16.484 [INFO][5134] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-mxmnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0", GenerateName:"calico-apiserver-567bf94b46-", Namespace:"calico-apiserver", SelfLink:"", UID:"d36dbd41-8dff-4eb6-a8f2-69b019debb74", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567bf94b46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6", Pod:"calico-apiserver-567bf94b46-mxmnt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70533c489a4", MAC:"a2:7a:a5:a9:2e:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:16.502979 containerd[1432]: 2025-07-11 00:05:16.497 [INFO][5134] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-mxmnt" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" Jul 11 00:05:16.526906 containerd[1432]: time="2025-07-11T00:05:16.526690118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8t882,Uid:69b81cc5-c8b0-45b3-aeb9-88292bebdc48,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1\"" Jul 11 00:05:16.531475 containerd[1432]: time="2025-07-11T00:05:16.531121050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:05:16.531475 containerd[1432]: time="2025-07-11T00:05:16.531194892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:05:16.531475 containerd[1432]: time="2025-07-11T00:05:16.531206773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:05:16.531475 containerd[1432]: time="2025-07-11T00:05:16.531306216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:05:16.550039 systemd[1]: Started cri-containerd-060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6.scope - libcontainer container 060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6. 
Jul 11 00:05:16.561493 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:05:16.588269 containerd[1432]: time="2025-07-11T00:05:16.588215794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567bf94b46-mxmnt,Uid:d36dbd41-8dff-4eb6-a8f2-69b019debb74,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6\"" Jul 11 00:05:16.592865 containerd[1432]: time="2025-07-11T00:05:16.592757889Z" level=info msg="CreateContainer within sandbox \"060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:05:16.616161 containerd[1432]: time="2025-07-11T00:05:16.616108186Z" level=info msg="CreateContainer within sandbox \"060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c5b356dd2fea2942ae5a0a0a3ea83c6c7354765c926fd600f7d1541fb270baaf\"" Jul 11 00:05:16.618415 containerd[1432]: time="2025-07-11T00:05:16.617007053Z" level=info msg="StartContainer for \"c5b356dd2fea2942ae5a0a0a3ea83c6c7354765c926fd600f7d1541fb270baaf\"" Jul 11 00:05:16.661088 systemd[1]: Started cri-containerd-c5b356dd2fea2942ae5a0a0a3ea83c6c7354765c926fd600f7d1541fb270baaf.scope - libcontainer container c5b356dd2fea2942ae5a0a0a3ea83c6c7354765c926fd600f7d1541fb270baaf. Jul 11 00:05:16.715528 containerd[1432]: time="2025-07-11T00:05:16.715367308Z" level=info msg="StartContainer for \"c5b356dd2fea2942ae5a0a0a3ea83c6c7354765c926fd600f7d1541fb270baaf\" returns successfully" Jul 11 00:05:17.023499 systemd[1]: run-netns-cni\x2dcded8476\x2d1eb4\x2de068\x2d0b76\x2df9cd53efe278.mount: Deactivated successfully. Jul 11 00:05:17.087194 kubelet[2472]: I0711 00:05:17.087155 2472 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:05:17.087194 kubelet[2472]: I0711 00:05:17.087160 2472 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:05:17.091029 kubelet[2472]: E0711 00:05:17.087681 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 11 00:05:17.103947 kubelet[2472]: I0711 00:05:17.103377 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-567bf94b46-mxmnt" podStartSLOduration=29.103354136 podStartE2EDuration="29.103354136s" podCreationTimestamp="2025-07-11 00:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:05:17.100872944 +0000 UTC m=+45.313677690" watchObservedRunningTime="2025-07-11 00:05:17.103354136 +0000 UTC m=+45.316158922" Jul 11 00:05:17.517177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2062709061.mount: Deactivated successfully. 
Jul 11 00:05:17.728012 systemd-networkd[1371]: calif99ae2c31c8: Gained IPv6LL Jul 11 00:05:17.792611 systemd-networkd[1371]: cali70533c489a4: Gained IPv6LL Jul 11 00:05:18.090696 kubelet[2472]: I0711 00:05:18.090656 2472 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:05:18.108657 containerd[1432]: time="2025-07-11T00:05:18.108600586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:18.110125 containerd[1432]: time="2025-07-11T00:05:18.110097829Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 11 00:05:18.111027 containerd[1432]: time="2025-07-11T00:05:18.110981014Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:18.113598 containerd[1432]: time="2025-07-11T00:05:18.113513206Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:18.114476 containerd[1432]: time="2025-07-11T00:05:18.114344910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.533184469s" Jul 11 00:05:18.114476 containerd[1432]: time="2025-07-11T00:05:18.114381631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 11 00:05:18.115589 containerd[1432]: time="2025-07-11T00:05:18.115509183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 00:05:18.116687 containerd[1432]: time="2025-07-11T00:05:18.116564173Z" level=info msg="CreateContainer within sandbox \"435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 00:05:18.136798 kubelet[2472]: I0711 00:05:18.136661 2472 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:05:18.161182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1121505087.mount: Deactivated successfully. Jul 11 00:05:18.169249 containerd[1432]: time="2025-07-11T00:05:18.168724701Z" level=info msg="CreateContainer within sandbox \"435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"346ab380e64402026f0568657ee2e99b15e0479464ab19c7703b52edbdbd1b92\"" Jul 11 00:05:18.169571 containerd[1432]: time="2025-07-11T00:05:18.169533485Z" level=info msg="StartContainer for \"346ab380e64402026f0568657ee2e99b15e0479464ab19c7703b52edbdbd1b92\"" Jul 11 00:05:18.217025 systemd[1]: Started cri-containerd-346ab380e64402026f0568657ee2e99b15e0479464ab19c7703b52edbdbd1b92.scope - libcontainer container 346ab380e64402026f0568657ee2e99b15e0479464ab19c7703b52edbdbd1b92. 
Jul 11 00:05:18.273516 containerd[1432]: time="2025-07-11T00:05:18.273412768Z" level=info msg="StartContainer for \"346ab380e64402026f0568657ee2e99b15e0479464ab19c7703b52edbdbd1b92\" returns successfully" Jul 11 00:05:18.432044 systemd-networkd[1371]: calid806a331429: Gained IPv6LL Jul 11 00:05:19.059284 kubelet[2472]: I0711 00:05:19.059019 2472 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:05:19.065022 containerd[1432]: time="2025-07-11T00:05:19.063500194Z" level=info msg="StopContainer for \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\" with timeout 30 (s)" Jul 11 00:05:19.065022 containerd[1432]: time="2025-07-11T00:05:19.064124011Z" level=info msg="Stop container \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\" with signal terminated" Jul 11 00:05:19.089380 systemd[1]: cri-containerd-8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3.scope: Deactivated successfully. Jul 11 00:05:19.089880 systemd[1]: cri-containerd-8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3.scope: Consumed 1.166s CPU time. Jul 11 00:05:19.109461 systemd[1]: Created slice kubepods-besteffort-pod58af7944_d003_404d_b6b4_511bf5df2ada.slice - libcontainer container kubepods-besteffort-pod58af7944_d003_404d_b6b4_511bf5df2ada.slice. Jul 11 00:05:19.136256 kubelet[2472]: I0711 00:05:19.135994 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58af7944-d003-404d-b6b4-511bf5df2ada-calico-apiserver-certs\") pod \"calico-apiserver-567bf94b46-fpvv4\" (UID: \"58af7944-d003-404d-b6b4-511bf5df2ada\") " pod="calico-apiserver/calico-apiserver-567bf94b46-fpvv4" Jul 11 00:05:19.136256 kubelet[2472]: I0711 00:05:19.136160 2472 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q9k5\" (UniqueName: \"kubernetes.io/projected/58af7944-d003-404d-b6b4-511bf5df2ada-kube-api-access-8q9k5\") pod \"calico-apiserver-567bf94b46-fpvv4\" (UID: \"58af7944-d003-404d-b6b4-511bf5df2ada\") " pod="calico-apiserver/calico-apiserver-567bf94b46-fpvv4" Jul 11 00:05:19.197667 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3-rootfs.mount: Deactivated successfully. 
Jul 11 00:05:19.236422 containerd[1432]: time="2025-07-11T00:05:19.232203027Z" level=info msg="shim disconnected" id=8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3 namespace=k8s.io Jul 11 00:05:19.236422 containerd[1432]: time="2025-07-11T00:05:19.236251260Z" level=warning msg="cleaning up after shim disconnected" id=8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3 namespace=k8s.io Jul 11 00:05:19.236422 containerd[1432]: time="2025-07-11T00:05:19.236267981Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:05:19.269842 containerd[1432]: time="2025-07-11T00:05:19.269478629Z" level=info msg="StopContainer for \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\" returns successfully" Jul 11 00:05:19.271100 containerd[1432]: time="2025-07-11T00:05:19.271066313Z" level=info msg="StopPodSandbox for \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\"" Jul 11 00:05:19.271182 containerd[1432]: time="2025-07-11T00:05:19.271115634Z" level=info msg="Container to stop \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:05:19.275404 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df-shm.mount: Deactivated successfully. Jul 11 00:05:19.279739 systemd[1]: cri-containerd-bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df.scope: Deactivated successfully. Jul 11 00:05:19.311689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df-rootfs.mount: Deactivated successfully. Jul 11 00:05:19.333629 containerd[1432]: time="2025-07-11T00:05:19.333397774Z" level=info msg="shim disconnected" id=bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df namespace=k8s.io Jul 11 00:05:19.333629 containerd[1432]: time="2025-07-11T00:05:19.333458416Z" level=warning msg="cleaning up after shim disconnected" id=bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df namespace=k8s.io Jul 11 00:05:19.333629 containerd[1432]: time="2025-07-11T00:05:19.333467936Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:05:19.417951 containerd[1432]: time="2025-07-11T00:05:19.417892135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567bf94b46-fpvv4,Uid:58af7944-d003-404d-b6b4-511bf5df2ada,Namespace:calico-apiserver,Attempt:0,}" Jul 11 00:05:19.434299 kubelet[2472]: I0711 00:05:19.434198 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-ts85z" podStartSLOduration=23.809582537 podStartE2EDuration="27.43418063s" podCreationTimestamp="2025-07-11 00:04:52 +0000 UTC" firstStartedPulling="2025-07-11 00:05:14.490758926 +0000 UTC m=+42.703563672" lastFinishedPulling="2025-07-11 00:05:18.115357019 +0000 UTC m=+46.328161765" observedRunningTime="2025-07-11 00:05:19.123459989 +0000 UTC m=+47.336264695" watchObservedRunningTime="2025-07-11 00:05:19.43418063 +0000 UTC m=+47.646985376" Jul 11 00:05:19.436603 systemd-networkd[1371]: cali60fd2e7552d: Link DOWN Jul 11 00:05:19.436610 systemd-networkd[1371]: cali60fd2e7552d: Lost carrier Jul 11 00:05:19.587801 containerd[1432]: 2025-07-11 00:05:19.435 [INFO][5505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Jul 11 00:05:19.587801 containerd[1432]: 2025-07-11 
00:05:19.435 [INFO][5505] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" iface="eth0" netns="/var/run/netns/cni-f6d353c2-3aec-34e6-c570-22a4db16bf15" Jul 11 00:05:19.587801 containerd[1432]: 2025-07-11 00:05:19.435 [INFO][5505] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" iface="eth0" netns="/var/run/netns/cni-f6d353c2-3aec-34e6-c570-22a4db16bf15" Jul 11 00:05:19.587801 containerd[1432]: 2025-07-11 00:05:19.447 [INFO][5505] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" after=12.053497ms iface="eth0" netns="/var/run/netns/cni-f6d353c2-3aec-34e6-c570-22a4db16bf15" Jul 11 00:05:19.587801 containerd[1432]: 2025-07-11 00:05:19.447 [INFO][5505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Jul 11 00:05:19.587801 containerd[1432]: 2025-07-11 00:05:19.447 [INFO][5505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Jul 11 00:05:19.587801 containerd[1432]: 2025-07-11 00:05:19.521 [INFO][5522] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" HandleID="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:19.587801 containerd[1432]: 2025-07-11 00:05:19.521 [INFO][5522] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:19.587801 containerd[1432]: 2025-07-11 00:05:19.521 [INFO][5522] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:19.587801 containerd[1432]: 2025-07-11 00:05:19.580 [INFO][5522] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" HandleID="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:19.587801 containerd[1432]: 2025-07-11 00:05:19.580 [INFO][5522] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" HandleID="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:19.587801 containerd[1432]: 2025-07-11 00:05:19.583 [INFO][5522] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:19.587801 containerd[1432]: 2025-07-11 00:05:19.585 [INFO][5505] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Jul 11 00:05:19.589012 containerd[1432]: time="2025-07-11T00:05:19.588791310Z" level=info msg="TearDown network for sandbox \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\" successfully" Jul 11 00:05:19.589012 containerd[1432]: time="2025-07-11T00:05:19.588824311Z" level=info msg="StopPodSandbox for \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\" returns successfully" Jul 11 00:05:19.589580 containerd[1432]: time="2025-07-11T00:05:19.589544531Z" level=info msg="StopPodSandbox for \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\"" Jul 11 00:05:19.651344 systemd-networkd[1371]: cali2ffa6565d3c: Link UP Jul 11 00:05:19.652992 systemd-networkd[1371]: cali2ffa6565d3c: Gained carrier Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.541 [INFO][5527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--567bf94b46--fpvv4-eth0 calico-apiserver-567bf94b46- calico-apiserver 58af7944-d003-404d-b6b4-511bf5df2ada 1175 0 2025-07-11 00:05:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:567bf94b46 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-567bf94b46-fpvv4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2ffa6565d3c [] [] }} ContainerID="681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-fpvv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--fpvv4-" Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.541 [INFO][5527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-fpvv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--fpvv4-eth0" Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.584 [INFO][5546] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" HandleID="k8s-pod-network.681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" Workload="localhost-k8s-calico--apiserver--567bf94b46--fpvv4-eth0" Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.584 [INFO][5546] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" HandleID="k8s-pod-network.681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" Workload="localhost-k8s-calico--apiserver--567bf94b46--fpvv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-567bf94b46-fpvv4", "timestamp":"2025-07-11 00:05:19.584724076 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.584 [INFO][5546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.585 [INFO][5546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.585 [INFO][5546] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.597 [INFO][5546] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" host="localhost" Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.603 [INFO][5546] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.613 [INFO][5546] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.615 [INFO][5546] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.618 [INFO][5546] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.618 [INFO][5546] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" host="localhost" Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.620 [INFO][5546] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18 Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.628 [INFO][5546] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" host="localhost" Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.644 [INFO][5546] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 handle="k8s-pod-network.681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" host="localhost" Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.644 [INFO][5546] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" host="localhost" Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.644 [INFO][5546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 00:05:19.674210 containerd[1432]: 2025-07-11 00:05:19.644 [INFO][5546] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" HandleID="k8s-pod-network.681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" Workload="localhost-k8s-calico--apiserver--567bf94b46--fpvv4-eth0" Jul 11 00:05:19.675251 containerd[1432]: 2025-07-11 00:05:19.647 [INFO][5527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-fpvv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--fpvv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567bf94b46--fpvv4-eth0", GenerateName:"calico-apiserver-567bf94b46-", Namespace:"calico-apiserver", SelfLink:"", UID:"58af7944-d003-404d-b6b4-511bf5df2ada", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567bf94b46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-567bf94b46-fpvv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ffa6565d3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:19.675251 containerd[1432]: 2025-07-11 00:05:19.647 [INFO][5527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-fpvv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--fpvv4-eth0" Jul 11 00:05:19.675251 containerd[1432]: 2025-07-11 00:05:19.647 [INFO][5527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ffa6565d3c ContainerID="681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-fpvv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--fpvv4-eth0" Jul 11 00:05:19.675251 containerd[1432]: 2025-07-11 00:05:19.653 [INFO][5527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-fpvv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--fpvv4-eth0" Jul 11 00:05:19.675251 containerd[1432]: 2025-07-11 00:05:19.655 [INFO][5527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-fpvv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--fpvv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567bf94b46--fpvv4-eth0", GenerateName:"calico-apiserver-567bf94b46-", Namespace:"calico-apiserver", SelfLink:"", UID:"58af7944-d003-404d-b6b4-511bf5df2ada", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 5, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567bf94b46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18", Pod:"calico-apiserver-567bf94b46-fpvv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2ffa6565d3c", MAC:"a2:f8:21:71:f0:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:19.675251 containerd[1432]: 2025-07-11 00:05:19.670 [INFO][5527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18" Namespace="calico-apiserver" Pod="calico-apiserver-567bf94b46-fpvv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--567bf94b46--fpvv4-eth0" Jul 11 00:05:19.702753 containerd[1432]: 2025-07-11 00:05:19.647 [WARNING][5564] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0", GenerateName:"calico-apiserver-bcf45dd9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbf96476-6021-46fe-b842-fd862c7996c8", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcf45dd9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df", Pod:"calico-apiserver-bcf45dd9c-dssv7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali60fd2e7552d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:19.702753 containerd[1432]: 2025-07-11 00:05:19.647 [INFO][5564] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:19.702753 containerd[1432]: 2025-07-11 00:05:19.647 [INFO][5564] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" iface="eth0" netns="" Jul 11 00:05:19.702753 containerd[1432]: 2025-07-11 00:05:19.647 [INFO][5564] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:19.702753 containerd[1432]: 2025-07-11 00:05:19.647 [INFO][5564] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:19.702753 containerd[1432]: 2025-07-11 00:05:19.682 [INFO][5573] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" HandleID="k8s-pod-network.9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:19.702753 containerd[1432]: 2025-07-11 00:05:19.682 [INFO][5573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:19.702753 containerd[1432]: 2025-07-11 00:05:19.682 [INFO][5573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:19.702753 containerd[1432]: 2025-07-11 00:05:19.696 [WARNING][5573] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" HandleID="k8s-pod-network.9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:19.702753 containerd[1432]: 2025-07-11 00:05:19.696 [INFO][5573] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" HandleID="k8s-pod-network.9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:19.702753 containerd[1432]: 2025-07-11 00:05:19.698 [INFO][5573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:19.702753 containerd[1432]: 2025-07-11 00:05:19.700 [INFO][5564] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:19.702753 containerd[1432]: time="2025-07-11T00:05:19.702636450Z" level=info msg="TearDown network for sandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\" successfully" Jul 11 00:05:19.702753 containerd[1432]: time="2025-07-11T00:05:19.702661491Z" level=info msg="StopPodSandbox for \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\" returns successfully" Jul 11 00:05:19.716967 containerd[1432]: time="2025-07-11T00:05:19.716367954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 11 00:05:19.718684 containerd[1432]: time="2025-07-11T00:05:19.716933530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 11 00:05:19.718684 containerd[1432]: time="2025-07-11T00:05:19.718320649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:05:19.718684 containerd[1432]: time="2025-07-11T00:05:19.718517774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 11 00:05:19.755072 systemd[1]: Started cri-containerd-681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18.scope - libcontainer container 681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18. 
Jul 11 00:05:19.798360 systemd-resolved[1302]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 00:05:19.840939 containerd[1432]: time="2025-07-11T00:05:19.840678627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-567bf94b46-fpvv4,Uid:58af7944-d003-404d-b6b4-511bf5df2ada,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18\"" Jul 11 00:05:19.842276 kubelet[2472]: I0711 00:05:19.842008 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l8kd\" (UniqueName: \"kubernetes.io/projected/dbf96476-6021-46fe-b842-fd862c7996c8-kube-api-access-4l8kd\") pod \"dbf96476-6021-46fe-b842-fd862c7996c8\" (UID: \"dbf96476-6021-46fe-b842-fd862c7996c8\") " Jul 11 00:05:19.842276 kubelet[2472]: I0711 00:05:19.842050 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbf96476-6021-46fe-b842-fd862c7996c8-calico-apiserver-certs\") pod \"dbf96476-6021-46fe-b842-fd862c7996c8\" (UID: \"dbf96476-6021-46fe-b842-fd862c7996c8\") " Jul 11 00:05:19.849445 kubelet[2472]: I0711 00:05:19.849006 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf96476-6021-46fe-b842-fd862c7996c8-kube-api-access-4l8kd" (OuterVolumeSpecName: "kube-api-access-4l8kd") pod "dbf96476-6021-46fe-b842-fd862c7996c8" (UID: "dbf96476-6021-46fe-b842-fd862c7996c8"). InnerVolumeSpecName "kube-api-access-4l8kd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 11 00:05:19.850571 kubelet[2472]: I0711 00:05:19.850456 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf96476-6021-46fe-b842-fd862c7996c8-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "dbf96476-6021-46fe-b842-fd862c7996c8" (UID: "dbf96476-6021-46fe-b842-fd862c7996c8"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 11 00:05:19.852234 containerd[1432]: time="2025-07-11T00:05:19.852049345Z" level=info msg="CreateContainer within sandbox \"681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 00:05:19.868297 containerd[1432]: time="2025-07-11T00:05:19.868195156Z" level=info msg="CreateContainer within sandbox \"681fe2bf5f1c59fd9b83e85b27576347327624edb6bbe5572ba622b3ab527c18\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c231293461fcdb3b2403479e049ed4d54372de9fc648c539356e0a7ec2927111\"" Jul 11 00:05:19.868995 containerd[1432]: time="2025-07-11T00:05:19.868707050Z" level=info msg="StartContainer for \"c231293461fcdb3b2403479e049ed4d54372de9fc648c539356e0a7ec2927111\"" Jul 11 00:05:19.903059 systemd[1]: Started cri-containerd-c231293461fcdb3b2403479e049ed4d54372de9fc648c539356e0a7ec2927111.scope - libcontainer container c231293461fcdb3b2403479e049ed4d54372de9fc648c539356e0a7ec2927111. Jul 11 00:05:19.904693 systemd[1]: Removed slice kubepods-besteffort-poddbf96476_6021_46fe_b842_fd862c7996c8.slice - libcontainer container kubepods-besteffort-poddbf96476_6021_46fe_b842_fd862c7996c8.slice. Jul 11 00:05:19.904948 systemd[1]: kubepods-besteffort-poddbf96476_6021_46fe_b842_fd862c7996c8.slice: Consumed 1.183s CPU time. 
Jul 11 00:05:19.944325 kubelet[2472]: I0711 00:05:19.944287 2472 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbf96476-6021-46fe-b842-fd862c7996c8-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 11 00:05:19.944325 kubelet[2472]: I0711 00:05:19.944320 2472 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l8kd\" (UniqueName: \"kubernetes.io/projected/dbf96476-6021-46fe-b842-fd862c7996c8-kube-api-access-4l8kd\") on node \"localhost\" DevicePath \"\"" Jul 11 00:05:19.983838 containerd[1432]: time="2025-07-11T00:05:19.983629221Z" level=info msg="StartContainer for \"c231293461fcdb3b2403479e049ed4d54372de9fc648c539356e0a7ec2927111\" returns successfully" Jul 11 00:05:20.109121 kubelet[2472]: I0711 00:05:20.108059 2472 scope.go:117] "RemoveContainer" containerID="8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3" Jul 11 00:05:20.113355 containerd[1432]: time="2025-07-11T00:05:20.113267820Z" level=info msg="RemoveContainer for \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\"" Jul 11 00:05:20.129803 containerd[1432]: time="2025-07-11T00:05:20.129703070Z" level=info msg="RemoveContainer for \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\" returns successfully" Jul 11 00:05:20.132164 kubelet[2472]: I0711 00:05:20.132065 2472 scope.go:117] "RemoveContainer" containerID="8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3" Jul 11 00:05:20.133023 containerd[1432]: time="2025-07-11T00:05:20.132700152Z" level=error msg="ContainerStatus for \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\": not found" Jul 11 00:05:20.134794 kubelet[2472]: E0711 00:05:20.134763 2472 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\": not found" containerID="8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3" Jul 11 00:05:20.134920 kubelet[2472]: I0711 00:05:20.134805 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3"} err="failed to get container status \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"8db8aa05ce3231c518d89fb5f2b93fdd54e601ed767359a9ff9b1cc9557329c3\": not found" Jul 11 00:05:20.195498 systemd[1]: run-netns-cni\x2df6d353c2\x2d3aec\x2d34e6\x2dc570\x2d22a4db16bf15.mount: Deactivated successfully. Jul 11 00:05:20.195621 systemd[1]: var-lib-kubelet-pods-dbf96476\x2d6021\x2d46fe\x2db842\x2dfd862c7996c8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4l8kd.mount: Deactivated successfully. Jul 11 00:05:20.195683 systemd[1]: var-lib-kubelet-pods-dbf96476\x2d6021\x2d46fe\x2db842\x2dfd862c7996c8-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Jul 11 00:05:20.342479 containerd[1432]: time="2025-07-11T00:05:20.342433415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:20.343647 containerd[1432]: time="2025-07-11T00:05:20.343614008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 11 00:05:20.344757 containerd[1432]: time="2025-07-11T00:05:20.344717718Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:20.346784 containerd[1432]: time="2025-07-11T00:05:20.346745054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:20.347901 containerd[1432]: time="2025-07-11T00:05:20.347843804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.232302619s" Jul 11 00:05:20.347955 containerd[1432]: time="2025-07-11T00:05:20.347901685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 11 00:05:20.349130 containerd[1432]: time="2025-07-11T00:05:20.349095838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 00:05:20.360097 containerd[1432]: time="2025-07-11T00:05:20.359991896Z" level=info msg="CreateContainer within sandbox \"2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 00:05:20.371827 containerd[1432]: time="2025-07-11T00:05:20.371772299Z" level=info msg="CreateContainer within sandbox \"2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"48bac4592f49c2c946a6ee55506f7e096a8206adaad158ce7e3da8bdf739c133\"" Jul 11 00:05:20.372585 containerd[1432]: time="2025-07-11T00:05:20.372561840Z" level=info msg="StartContainer for \"48bac4592f49c2c946a6ee55506f7e096a8206adaad158ce7e3da8bdf739c133\"" Jul 11 00:05:20.403073 systemd[1]: Started cri-containerd-48bac4592f49c2c946a6ee55506f7e096a8206adaad158ce7e3da8bdf739c133.scope - libcontainer container 48bac4592f49c2c946a6ee55506f7e096a8206adaad158ce7e3da8bdf739c133. Jul 11 00:05:20.440127 containerd[1432]: time="2025-07-11T00:05:20.439212745Z" level=info msg="StartContainer for \"48bac4592f49c2c946a6ee55506f7e096a8206adaad158ce7e3da8bdf739c133\" returns successfully" Jul 11 00:05:20.701308 systemd[1]: Started sshd@8-10.0.0.27:22-10.0.0.1:54390.service - OpenSSH per-connection server daemon (10.0.0.1:54390). 
Jul 11 00:05:20.768990 sshd[5749]: Accepted publickey for core from 10.0.0.1 port 54390 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:05:20.774664 sshd[5749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:05:20.780834 systemd-logind[1416]: New session 9 of user core. Jul 11 00:05:20.786091 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 00:05:21.120823 kubelet[2472]: I0711 00:05:21.120612 2472 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:05:21.136090 kubelet[2472]: I0711 00:05:21.136023 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-567bf94b46-fpvv4" podStartSLOduration=2.136001795 podStartE2EDuration="2.136001795s" podCreationTimestamp="2025-07-11 00:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 00:05:20.146315165 +0000 UTC m=+48.359119951" watchObservedRunningTime="2025-07-11 00:05:21.136001795 +0000 UTC m=+49.348806541" Jul 11 00:05:21.291609 sshd[5749]: pam_unix(sshd:session): session closed for user core Jul 11 00:05:21.296023 systemd[1]: sshd@8-10.0.0.27:22-10.0.0.1:54390.service: Deactivated successfully. Jul 11 00:05:21.298647 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 00:05:21.299643 systemd-logind[1416]: Session 9 logged out. Waiting for processes to exit. Jul 11 00:05:21.301761 systemd-logind[1416]: Removed session 9. Jul 11 00:05:21.501311 containerd[1432]: time="2025-07-11T00:05:21.501184964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:21.501920 containerd[1432]: time="2025-07-11T00:05:21.501888543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 11 00:05:21.503224 containerd[1432]: time="2025-07-11T00:05:21.502902690Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:21.505461 containerd[1432]: time="2025-07-11T00:05:21.505428718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:21.506032 containerd[1432]: time="2025-07-11T00:05:21.505989533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.156860414s" Jul 11 00:05:21.506032 containerd[1432]: time="2025-07-11T00:05:21.506028694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 11 00:05:21.508913 containerd[1432]: time="2025-07-11T00:05:21.508463959Z" level=info msg="CreateContainer within sandbox \"eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 00:05:21.522598 containerd[1432]: time="2025-07-11T00:05:21.522545137Z" level=info 
msg="CreateContainer within sandbox \"eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a056bda2e96f38047a480582ca147c8f40b6facd6ae0e54b659d273f388afa2c\"" Jul 11 00:05:21.523211 containerd[1432]: time="2025-07-11T00:05:21.523189635Z" level=info msg="StartContainer for \"a056bda2e96f38047a480582ca147c8f40b6facd6ae0e54b659d273f388afa2c\"" Jul 11 00:05:21.558040 systemd[1]: Started cri-containerd-a056bda2e96f38047a480582ca147c8f40b6facd6ae0e54b659d273f388afa2c.scope - libcontainer container a056bda2e96f38047a480582ca147c8f40b6facd6ae0e54b659d273f388afa2c. Jul 11 00:05:21.568724 systemd-networkd[1371]: cali2ffa6565d3c: Gained IPv6LL Jul 11 00:05:21.587277 containerd[1432]: time="2025-07-11T00:05:21.587226995Z" level=info msg="StartContainer for \"a056bda2e96f38047a480582ca147c8f40b6facd6ae0e54b659d273f388afa2c\" returns successfully" Jul 11 00:05:21.590224 containerd[1432]: time="2025-07-11T00:05:21.590110592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 00:05:21.887233 kubelet[2472]: I0711 00:05:21.886952 2472 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf96476-6021-46fe-b842-fd862c7996c8" path="/var/lib/kubelet/pods/dbf96476-6021-46fe-b842-fd862c7996c8/volumes" Jul 11 00:05:22.124519 kubelet[2472]: I0711 00:05:22.124486 2472 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:05:22.957285 containerd[1432]: time="2025-07-11T00:05:22.957238087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:22.957968 containerd[1432]: time="2025-07-11T00:05:22.957881304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 11 00:05:22.959053 containerd[1432]: time="2025-07-11T00:05:22.959011974Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:22.965883 containerd[1432]: time="2025-07-11T00:05:22.965447064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 00:05:22.966068 containerd[1432]: time="2025-07-11T00:05:22.966039639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.375889446s" Jul 11 00:05:22.966115 containerd[1432]: time="2025-07-11T00:05:22.966074520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 11 00:05:22.968351 containerd[1432]: time="2025-07-11T00:05:22.968320539Z" level=info msg="CreateContainer within sandbox \"eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 00:05:22.984251 containerd[1432]: 
time="2025-07-11T00:05:22.983815228Z" level=info msg="CreateContainer within sandbox \"eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f73f3edd70e710cdbaa0cac8be5f447281347cddc6f4e8c11880ab8620151156\"" Jul 11 00:05:22.985310 containerd[1432]: time="2025-07-11T00:05:22.985265386Z" level=info msg="StartContainer for \"f73f3edd70e710cdbaa0cac8be5f447281347cddc6f4e8c11880ab8620151156\"" Jul 11 00:05:23.025050 systemd[1]: Started cri-containerd-f73f3edd70e710cdbaa0cac8be5f447281347cddc6f4e8c11880ab8620151156.scope - libcontainer container f73f3edd70e710cdbaa0cac8be5f447281347cddc6f4e8c11880ab8620151156. Jul 11 00:05:23.050104 containerd[1432]: time="2025-07-11T00:05:23.049996511Z" level=info msg="StartContainer for \"f73f3edd70e710cdbaa0cac8be5f447281347cddc6f4e8c11880ab8620151156\" returns successfully" Jul 11 00:05:23.142098 kubelet[2472]: I0711 00:05:23.141966 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8t882" podStartSLOduration=24.707847653 podStartE2EDuration="31.141947294s" podCreationTimestamp="2025-07-11 00:04:52 +0000 UTC" firstStartedPulling="2025-07-11 00:05:16.533032827 +0000 UTC m=+44.745837573" lastFinishedPulling="2025-07-11 00:05:22.967132468 +0000 UTC m=+51.179937214" observedRunningTime="2025-07-11 00:05:23.141004109 +0000 UTC m=+51.353808895" watchObservedRunningTime="2025-07-11 00:05:23.141947294 +0000 UTC m=+51.354752040" Jul 11 00:05:23.142503 kubelet[2472]: I0711 00:05:23.142142 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7688b6944f-z5wj7" podStartSLOduration=26.195412268 podStartE2EDuration="30.142136178s" podCreationTimestamp="2025-07-11 00:04:53 +0000 UTC" firstStartedPulling="2025-07-11 00:05:16.402232724 +0000 UTC m=+44.615037470" lastFinishedPulling="2025-07-11 00:05:20.348956674 +0000 UTC m=+48.561761380" observedRunningTime="2025-07-11 00:05:21.13507945 +0000 UTC m=+49.347884196" watchObservedRunningTime="2025-07-11 00:05:23.142136178 +0000 UTC m=+51.354940924" Jul 11 00:05:23.965979 kubelet[2472]: I0711 00:05:23.965941 2472 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 11 00:05:23.968654 kubelet[2472]: I0711 00:05:23.968424 2472 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 11 00:05:26.303701 systemd[1]: Started sshd@9-10.0.0.27:22-10.0.0.1:46484.service - OpenSSH per-connection server daemon (10.0.0.1:46484). Jul 11 00:05:26.355387 sshd[5902]: Accepted publickey for core from 10.0.0.1 port 46484 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:05:26.357187 sshd[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:05:26.369387 systemd-logind[1416]: New session 10 of user core. Jul 11 00:05:26.378043 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 00:05:26.635796 sshd[5902]: pam_unix(sshd:session): session closed for user core Jul 11 00:05:26.649132 systemd[1]: sshd@9-10.0.0.27:22-10.0.0.1:46484.service: Deactivated successfully. Jul 11 00:05:26.651079 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 00:05:26.653181 systemd-logind[1416]: Session 10 logged out. Waiting for processes to exit. 
Jul 11 00:05:26.659197 systemd[1]: Started sshd@10-10.0.0.27:22-10.0.0.1:46486.service - OpenSSH per-connection server daemon (10.0.0.1:46486). Jul 11 00:05:26.661864 systemd-logind[1416]: Removed session 10. Jul 11 00:05:26.700797 sshd[5917]: Accepted publickey for core from 10.0.0.1 port 46486 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:05:26.702456 sshd[5917]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:05:26.710866 systemd-logind[1416]: New session 11 of user core. Jul 11 00:05:26.719103 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 11 00:05:26.964992 sshd[5917]: pam_unix(sshd:session): session closed for user core Jul 11 00:05:26.971005 systemd[1]: sshd@10-10.0.0.27:22-10.0.0.1:46486.service: Deactivated successfully. Jul 11 00:05:26.972784 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 00:05:26.974431 systemd-logind[1416]: Session 11 logged out. Waiting for processes to exit. Jul 11 00:05:26.984154 systemd[1]: Started sshd@11-10.0.0.27:22-10.0.0.1:46492.service - OpenSSH per-connection server daemon (10.0.0.1:46492). Jul 11 00:05:26.988063 systemd-logind[1416]: Removed session 11. Jul 11 00:05:27.023414 sshd[5935]: Accepted publickey for core from 10.0.0.1 port 46492 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:05:27.024751 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:05:27.028603 systemd-logind[1416]: New session 12 of user core. Jul 11 00:05:27.036531 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 00:05:27.043683 kubelet[2472]: I0711 00:05:27.043638 2472 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:05:27.104815 systemd[1]: run-containerd-runc-k8s.io-48bac4592f49c2c946a6ee55506f7e096a8206adaad158ce7e3da8bdf739c133-runc.ybItCG.mount: Deactivated successfully. Jul 11 00:05:27.218909 sshd[5935]: pam_unix(sshd:session): session closed for user core Jul 11 00:05:27.223859 systemd[1]: sshd@11-10.0.0.27:22-10.0.0.1:46492.service: Deactivated successfully. Jul 11 00:05:27.226055 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 00:05:27.227012 systemd-logind[1416]: Session 12 logged out. Waiting for processes to exit. Jul 11 00:05:27.227842 systemd-logind[1416]: Removed session 12. Jul 11 00:05:31.867593 containerd[1432]: time="2025-07-11T00:05:31.867398198Z" level=info msg="StopPodSandbox for \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\"" Jul 11 00:05:31.980788 containerd[1432]: 2025-07-11 00:05:31.925 [WARNING][6006] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ts85z-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"1a8f323e-1163-4dff-a9f9-a0583b72a07e", ResourceVersion:"1177", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f", Pod:"goldmane-58fd7646b9-ts85z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califa2fc2be6a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:31.980788 containerd[1432]: 2025-07-11 00:05:31.925 [INFO][6006] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:31.980788 containerd[1432]: 2025-07-11 00:05:31.925 [INFO][6006] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" iface="eth0" netns="" Jul 11 00:05:31.980788 containerd[1432]: 2025-07-11 00:05:31.925 [INFO][6006] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:31.980788 containerd[1432]: 2025-07-11 00:05:31.925 [INFO][6006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:31.980788 containerd[1432]: 2025-07-11 00:05:31.957 [INFO][6017] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" HandleID="k8s-pod-network.9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Workload="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0" Jul 11 00:05:31.980788 containerd[1432]: 2025-07-11 00:05:31.958 [INFO][6017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:31.980788 containerd[1432]: 2025-07-11 00:05:31.958 [INFO][6017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:31.980788 containerd[1432]: 2025-07-11 00:05:31.970 [WARNING][6017] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" HandleID="k8s-pod-network.9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Workload="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0" Jul 11 00:05:31.980788 containerd[1432]: 2025-07-11 00:05:31.970 [INFO][6017] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" HandleID="k8s-pod-network.9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Workload="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0" Jul 11 00:05:31.980788 containerd[1432]: 2025-07-11 00:05:31.977 [INFO][6017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:31.980788 containerd[1432]: 2025-07-11 00:05:31.979 [INFO][6006] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:31.981503 containerd[1432]: time="2025-07-11T00:05:31.981108909Z" level=info msg="TearDown network for sandbox \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\" successfully" Jul 11 00:05:31.981503 containerd[1432]: time="2025-07-11T00:05:31.981142590Z" level=info msg="StopPodSandbox for \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\" returns successfully" Jul 11 00:05:31.982172 containerd[1432]: time="2025-07-11T00:05:31.982146813Z" level=info msg="RemovePodSandbox for \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\"" Jul 11 00:05:31.991289 containerd[1432]: time="2025-07-11T00:05:31.991245024Z" level=info msg="Forcibly stopping sandbox \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\"" Jul 11 00:05:32.056222 containerd[1432]: 2025-07-11 00:05:32.024 [WARNING][6035] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ts85z-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"1a8f323e-1163-4dff-a9f9-a0583b72a07e", ResourceVersion:"1177", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"435dad0172bcda8431c3fce3d07ad6be6fd04d6e0656ddc06b11d72dc01c4e9f", Pod:"goldmane-58fd7646b9-ts85z", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califa2fc2be6a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:32.056222 containerd[1432]: 2025-07-11 00:05:32.024 [INFO][6035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:32.056222 containerd[1432]: 2025-07-11 00:05:32.024 [INFO][6035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" iface="eth0" netns="" Jul 11 00:05:32.056222 containerd[1432]: 2025-07-11 00:05:32.024 [INFO][6035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:32.056222 containerd[1432]: 2025-07-11 00:05:32.024 [INFO][6035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:32.056222 containerd[1432]: 2025-07-11 00:05:32.043 [INFO][6044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" HandleID="k8s-pod-network.9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Workload="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0" Jul 11 00:05:32.056222 containerd[1432]: 2025-07-11 00:05:32.043 [INFO][6044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:32.056222 containerd[1432]: 2025-07-11 00:05:32.043 [INFO][6044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:32.056222 containerd[1432]: 2025-07-11 00:05:32.051 [WARNING][6044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" HandleID="k8s-pod-network.9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Workload="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0" Jul 11 00:05:32.056222 containerd[1432]: 2025-07-11 00:05:32.051 [INFO][6044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" HandleID="k8s-pod-network.9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Workload="localhost-k8s-goldmane--58fd7646b9--ts85z-eth0" Jul 11 00:05:32.056222 containerd[1432]: 2025-07-11 00:05:32.052 [INFO][6044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:32.056222 containerd[1432]: 2025-07-11 00:05:32.054 [INFO][6035] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e" Jul 11 00:05:32.056703 containerd[1432]: time="2025-07-11T00:05:32.056260634Z" level=info msg="TearDown network for sandbox \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\" successfully" Jul 11 00:05:32.071096 containerd[1432]: time="2025-07-11T00:05:32.071053933Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:05:32.071213 containerd[1432]: time="2025-07-11T00:05:32.071139735Z" level=info msg="RemovePodSandbox \"9566cc30f9304342abe8dc83394fe53245a3daffb54ef36913dd3504f0ba930e\" returns successfully" Jul 11 00:05:32.071572 containerd[1432]: time="2025-07-11T00:05:32.071548704Z" level=info msg="StopPodSandbox for \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\"" Jul 11 00:05:32.142141 containerd[1432]: 2025-07-11 00:05:32.107 [WARNING][6061] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0", GenerateName:"calico-kube-controllers-7688b6944f-", Namespace:"calico-system", SelfLink:"", UID:"f2557641-a3f6-4d30-9fc0-458168133b25", ResourceVersion:"1276", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7688b6944f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca", Pod:"calico-kube-controllers-7688b6944f-z5wj7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif99ae2c31c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:32.142141 containerd[1432]: 2025-07-11 00:05:32.107 [INFO][6061] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Jul 11 00:05:32.142141 containerd[1432]: 2025-07-11 00:05:32.107 [INFO][6061] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" iface="eth0" netns="" Jul 11 00:05:32.142141 containerd[1432]: 2025-07-11 00:05:32.107 [INFO][6061] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Jul 11 00:05:32.142141 containerd[1432]: 2025-07-11 00:05:32.107 [INFO][6061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Jul 11 00:05:32.142141 containerd[1432]: 2025-07-11 00:05:32.128 [INFO][6069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" HandleID="k8s-pod-network.0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Workload="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0" Jul 11 00:05:32.142141 containerd[1432]: 2025-07-11 00:05:32.128 [INFO][6069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:32.142141 containerd[1432]: 2025-07-11 00:05:32.128 [INFO][6069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:32.142141 containerd[1432]: 2025-07-11 00:05:32.137 [WARNING][6069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" HandleID="k8s-pod-network.0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Workload="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0" Jul 11 00:05:32.142141 containerd[1432]: 2025-07-11 00:05:32.137 [INFO][6069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" HandleID="k8s-pod-network.0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Workload="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0" Jul 11 00:05:32.142141 containerd[1432]: 2025-07-11 00:05:32.138 [INFO][6069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:32.142141 containerd[1432]: 2025-07-11 00:05:32.140 [INFO][6061] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Jul 11 00:05:32.142141 containerd[1432]: time="2025-07-11T00:05:32.142122159Z" level=info msg="TearDown network for sandbox \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\" successfully" Jul 11 00:05:32.142537 containerd[1432]: time="2025-07-11T00:05:32.142145840Z" level=info msg="StopPodSandbox for \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\" returns successfully" Jul 11 00:05:32.143078 containerd[1432]: time="2025-07-11T00:05:32.142602370Z" level=info msg="RemovePodSandbox for \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\"" Jul 11 00:05:32.143078 containerd[1432]: time="2025-07-11T00:05:32.142634651Z" level=info msg="Forcibly stopping sandbox \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\"" Jul 11 00:05:32.206518 containerd[1432]: 2025-07-11 00:05:32.173 [WARNING][6087] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0", GenerateName:"calico-kube-controllers-7688b6944f-", Namespace:"calico-system", SelfLink:"", UID:"f2557641-a3f6-4d30-9fc0-458168133b25", ResourceVersion:"1276", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7688b6944f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2352b5994279452d25ac95643c14687e991596d7b77f95d253212bcb618b6eca", Pod:"calico-kube-controllers-7688b6944f-z5wj7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif99ae2c31c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:32.206518 containerd[1432]: 2025-07-11 00:05:32.174 [INFO][6087] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Jul 11 00:05:32.206518 containerd[1432]: 2025-07-11 00:05:32.174 [INFO][6087] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" iface="eth0" netns="" Jul 11 00:05:32.206518 containerd[1432]: 2025-07-11 00:05:32.174 [INFO][6087] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Jul 11 00:05:32.206518 containerd[1432]: 2025-07-11 00:05:32.174 [INFO][6087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Jul 11 00:05:32.206518 containerd[1432]: 2025-07-11 00:05:32.191 [INFO][6095] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" HandleID="k8s-pod-network.0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Workload="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0" Jul 11 00:05:32.206518 containerd[1432]: 2025-07-11 00:05:32.191 [INFO][6095] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:32.206518 containerd[1432]: 2025-07-11 00:05:32.191 [INFO][6095] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:32.206518 containerd[1432]: 2025-07-11 00:05:32.201 [WARNING][6095] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" HandleID="k8s-pod-network.0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Workload="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0" Jul 11 00:05:32.206518 containerd[1432]: 2025-07-11 00:05:32.201 [INFO][6095] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" HandleID="k8s-pod-network.0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Workload="localhost-k8s-calico--kube--controllers--7688b6944f--z5wj7-eth0" Jul 11 00:05:32.206518 containerd[1432]: 2025-07-11 00:05:32.203 [INFO][6095] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:32.206518 containerd[1432]: 2025-07-11 00:05:32.205 [INFO][6087] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf" Jul 11 00:05:32.207043 containerd[1432]: time="2025-07-11T00:05:32.206575754Z" level=info msg="TearDown network for sandbox \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\" successfully" Jul 11 00:05:32.212056 containerd[1432]: time="2025-07-11T00:05:32.211993118Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:05:32.212298 containerd[1432]: time="2025-07-11T00:05:32.212264404Z" level=info msg="RemovePodSandbox \"0140b2576bd4d50c896cd16d20420e4b07281d6151fc49ed1d0bbfbf37cd0ecf\" returns successfully" Jul 11 00:05:32.212760 containerd[1432]: time="2025-07-11T00:05:32.212701934Z" level=info msg="StopPodSandbox for \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\"" Jul 11 00:05:32.233731 systemd[1]: Started sshd@12-10.0.0.27:22-10.0.0.1:46496.service - OpenSSH per-connection server daemon (10.0.0.1:46496). Jul 11 00:05:32.283370 sshd[6121]: Accepted publickey for core from 10.0.0.1 port 46496 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M Jul 11 00:05:32.286240 sshd[6121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 00:05:32.288018 containerd[1432]: 2025-07-11 00:05:32.249 [WARNING][6113] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" WorkloadEndpoint="localhost-k8s-whisker--64fb989db4--7nm8p-eth0" Jul 11 00:05:32.288018 containerd[1432]: 2025-07-11 00:05:32.249 [INFO][6113] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:32.288018 containerd[1432]: 2025-07-11 00:05:32.249 [INFO][6113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" iface="eth0" netns="" Jul 11 00:05:32.288018 containerd[1432]: 2025-07-11 00:05:32.249 [INFO][6113] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:32.288018 containerd[1432]: 2025-07-11 00:05:32.249 [INFO][6113] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:32.288018 containerd[1432]: 2025-07-11 00:05:32.272 [INFO][6124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" HandleID="k8s-pod-network.f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Workload="localhost-k8s-whisker--64fb989db4--7nm8p-eth0" Jul 11 00:05:32.288018 containerd[1432]: 2025-07-11 00:05:32.272 [INFO][6124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:32.288018 containerd[1432]: 2025-07-11 00:05:32.272 [INFO][6124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:32.288018 containerd[1432]: 2025-07-11 00:05:32.281 [WARNING][6124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" HandleID="k8s-pod-network.f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Workload="localhost-k8s-whisker--64fb989db4--7nm8p-eth0" Jul 11 00:05:32.288018 containerd[1432]: 2025-07-11 00:05:32.281 [INFO][6124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" HandleID="k8s-pod-network.f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Workload="localhost-k8s-whisker--64fb989db4--7nm8p-eth0" Jul 11 00:05:32.288018 containerd[1432]: 2025-07-11 00:05:32.283 [INFO][6124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:32.288018 containerd[1432]: 2025-07-11 00:05:32.285 [INFO][6113] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:32.289629 containerd[1432]: time="2025-07-11T00:05:32.288349425Z" level=info msg="TearDown network for sandbox \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\" successfully" Jul 11 00:05:32.289629 containerd[1432]: time="2025-07-11T00:05:32.288955519Z" level=info msg="StopPodSandbox for \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\" returns successfully" Jul 11 00:05:32.291408 containerd[1432]: time="2025-07-11T00:05:32.291110249Z" level=info msg="RemovePodSandbox for \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\"" Jul 11 00:05:32.291408 containerd[1432]: time="2025-07-11T00:05:32.291144449Z" level=info msg="Forcibly stopping sandbox \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\"" Jul 11 00:05:32.291802 systemd-logind[1416]: New session 13 of user core. Jul 11 00:05:32.305163 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 11 00:05:32.370891 containerd[1432]: 2025-07-11 00:05:32.327 [WARNING][6143] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" WorkloadEndpoint="localhost-k8s-whisker--64fb989db4--7nm8p-eth0" Jul 11 00:05:32.370891 containerd[1432]: 2025-07-11 00:05:32.327 [INFO][6143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:32.370891 containerd[1432]: 2025-07-11 00:05:32.327 [INFO][6143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" iface="eth0" netns="" Jul 11 00:05:32.370891 containerd[1432]: 2025-07-11 00:05:32.327 [INFO][6143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:32.370891 containerd[1432]: 2025-07-11 00:05:32.327 [INFO][6143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:32.370891 containerd[1432]: 2025-07-11 00:05:32.348 [INFO][6153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" HandleID="k8s-pod-network.f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Workload="localhost-k8s-whisker--64fb989db4--7nm8p-eth0" Jul 11 00:05:32.370891 containerd[1432]: 2025-07-11 00:05:32.348 [INFO][6153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:32.370891 containerd[1432]: 2025-07-11 00:05:32.348 [INFO][6153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:32.370891 containerd[1432]: 2025-07-11 00:05:32.359 [WARNING][6153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" HandleID="k8s-pod-network.f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Workload="localhost-k8s-whisker--64fb989db4--7nm8p-eth0" Jul 11 00:05:32.370891 containerd[1432]: 2025-07-11 00:05:32.359 [INFO][6153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" HandleID="k8s-pod-network.f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Workload="localhost-k8s-whisker--64fb989db4--7nm8p-eth0" Jul 11 00:05:32.370891 containerd[1432]: 2025-07-11 00:05:32.366 [INFO][6153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:32.370891 containerd[1432]: 2025-07-11 00:05:32.369 [INFO][6143] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198" Jul 11 00:05:32.370891 containerd[1432]: time="2025-07-11T00:05:32.370596628Z" level=info msg="TearDown network for sandbox \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\" successfully" Jul 11 00:05:32.381572 containerd[1432]: time="2025-07-11T00:05:32.381521998Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
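[Editor's note: every address release above is bracketed by "About to acquire / Acquired / Released host-wide IPAM lock", i.e. concurrent CNI invocations on the node are serialized around IPAM state. A minimal sketch of that pattern with an exclusive file lock; the lock path is an assumption for illustration, not Calico's actual implementation.]

// ipam_lock_sketch.go - host-wide serialization via flock(2), illustrative only.
package main

import (
	"log"
	"os"

	"golang.org/x/sys/unix"
)

func withHostWideLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()

	// LOCK_EX blocks until every other holder on this host releases the lock,
	// so concurrent CNI DEL/ADD operations cannot race on IPAM state.
	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		return err
	}
	defer unix.Flock(int(f.Fd()), unix.LOCK_UN)

	return fn()
}

func main() {
	// Hypothetical lock path, named here only for the example.
	err := withHostWideLock("/var/run/calico/ipam.lock", func() error {
		log.Println("releasing addresses while holding the host-wide lock")
		return nil
	})
	if err != nil {
		log.Fatal(err)
	}
}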
Jul 11 00:05:32.381688 containerd[1432]: time="2025-07-11T00:05:32.381600599Z" level=info msg="RemovePodSandbox \"f022a89c1b857c440aa48d676350029f076146ffdd4f1def07d41add2fc3a198\" returns successfully" Jul 11 00:05:32.382447 containerd[1432]: time="2025-07-11T00:05:32.382121291Z" level=info msg="StopPodSandbox for \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\"" Jul 11 00:05:32.453997 containerd[1432]: 2025-07-11 00:05:32.416 [WARNING][6179] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0", GenerateName:"calico-apiserver-567bf94b46-", Namespace:"calico-apiserver", SelfLink:"", UID:"d36dbd41-8dff-4eb6-a8f2-69b019debb74", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567bf94b46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6", Pod:"calico-apiserver-567bf94b46-mxmnt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70533c489a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:32.453997 containerd[1432]: 2025-07-11 00:05:32.416 [INFO][6179] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Jul 11 00:05:32.453997 containerd[1432]: 2025-07-11 00:05:32.416 [INFO][6179] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" iface="eth0" netns="" Jul 11 00:05:32.453997 containerd[1432]: 2025-07-11 00:05:32.416 [INFO][6179] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Jul 11 00:05:32.453997 containerd[1432]: 2025-07-11 00:05:32.416 [INFO][6179] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Jul 11 00:05:32.453997 containerd[1432]: 2025-07-11 00:05:32.440 [INFO][6187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" HandleID="k8s-pod-network.9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Workload="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" Jul 11 00:05:32.453997 containerd[1432]: 2025-07-11 00:05:32.440 [INFO][6187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:32.453997 containerd[1432]: 2025-07-11 00:05:32.440 [INFO][6187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:32.453997 containerd[1432]: 2025-07-11 00:05:32.448 [WARNING][6187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" HandleID="k8s-pod-network.9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Workload="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" Jul 11 00:05:32.453997 containerd[1432]: 2025-07-11 00:05:32.448 [INFO][6187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" HandleID="k8s-pod-network.9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Workload="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" Jul 11 00:05:32.453997 containerd[1432]: 2025-07-11 00:05:32.450 [INFO][6187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:32.453997 containerd[1432]: 2025-07-11 00:05:32.451 [INFO][6179] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Jul 11 00:05:32.453997 containerd[1432]: time="2025-07-11T00:05:32.453979696Z" level=info msg="TearDown network for sandbox \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\" successfully" Jul 11 00:05:32.455343 containerd[1432]: time="2025-07-11T00:05:32.454005056Z" level=info msg="StopPodSandbox for \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\" returns successfully" Jul 11 00:05:32.455343 containerd[1432]: time="2025-07-11T00:05:32.454772394Z" level=info msg="RemovePodSandbox for \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\"" Jul 11 00:05:32.455343 containerd[1432]: time="2025-07-11T00:05:32.454835755Z" level=info msg="Forcibly stopping sandbox \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\"" Jul 11 00:05:32.533888 containerd[1432]: 2025-07-11 00:05:32.495 [WARNING][6206] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0", GenerateName:"calico-apiserver-567bf94b46-", Namespace:"calico-apiserver", SelfLink:"", UID:"d36dbd41-8dff-4eb6-a8f2-69b019debb74", ResourceVersion:"1149", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"567bf94b46", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"060c472c6a00f5c2bc1a082ac15353b4ad956ef298b8e9adce622ab340e6eea6", Pod:"calico-apiserver-567bf94b46-mxmnt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70533c489a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:32.533888 containerd[1432]: 2025-07-11 00:05:32.496 [INFO][6206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Jul 11 00:05:32.533888 containerd[1432]: 2025-07-11 00:05:32.496 [INFO][6206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" iface="eth0" netns="" Jul 11 00:05:32.533888 containerd[1432]: 2025-07-11 00:05:32.496 [INFO][6206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Jul 11 00:05:32.533888 containerd[1432]: 2025-07-11 00:05:32.496 [INFO][6206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Jul 11 00:05:32.533888 containerd[1432]: 2025-07-11 00:05:32.517 [INFO][6215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" HandleID="k8s-pod-network.9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Workload="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" Jul 11 00:05:32.533888 containerd[1432]: 2025-07-11 00:05:32.518 [INFO][6215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:32.533888 containerd[1432]: 2025-07-11 00:05:32.518 [INFO][6215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:32.533888 containerd[1432]: 2025-07-11 00:05:32.526 [WARNING][6215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" HandleID="k8s-pod-network.9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Workload="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" Jul 11 00:05:32.533888 containerd[1432]: 2025-07-11 00:05:32.526 [INFO][6215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" HandleID="k8s-pod-network.9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Workload="localhost-k8s-calico--apiserver--567bf94b46--mxmnt-eth0" Jul 11 00:05:32.533888 containerd[1432]: 2025-07-11 00:05:32.528 [INFO][6215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:32.533888 containerd[1432]: 2025-07-11 00:05:32.531 [INFO][6206] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2" Jul 11 00:05:32.533888 containerd[1432]: time="2025-07-11T00:05:32.533619638Z" level=info msg="TearDown network for sandbox \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\" successfully" Jul 11 00:05:32.538456 containerd[1432]: time="2025-07-11T00:05:32.538385387Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:05:32.538563 containerd[1432]: time="2025-07-11T00:05:32.538509950Z" level=info msg="RemovePodSandbox \"9a1787a904869c237ee3005676c38fdcbfea0f0949e32c2251821bdb87bd06b2\" returns successfully" Jul 11 00:05:32.539341 containerd[1432]: time="2025-07-11T00:05:32.539099764Z" level=info msg="StopPodSandbox for \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\"" Jul 11 00:05:32.593065 sshd[6121]: pam_unix(sshd:session): session closed for user core Jul 11 00:05:32.598075 systemd[1]: sshd@12-10.0.0.27:22-10.0.0.1:46496.service: Deactivated successfully. Jul 11 00:05:32.599725 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 00:05:32.600731 systemd-logind[1416]: Session 13 logged out. Waiting for processes to exit. Jul 11 00:05:32.601770 systemd-logind[1416]: Removed session 13. Jul 11 00:05:32.627046 containerd[1432]: 2025-07-11 00:05:32.586 [WARNING][6233] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8t882-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69b81cc5-c8b0-45b3-aeb9-88292bebdc48", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1", Pod:"csi-node-driver-8t882", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid806a331429", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:32.627046 containerd[1432]: 2025-07-11 00:05:32.586 [INFO][6233] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Jul 11 00:05:32.627046 containerd[1432]: 2025-07-11 00:05:32.586 [INFO][6233] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" iface="eth0" netns="" Jul 11 00:05:32.627046 containerd[1432]: 2025-07-11 00:05:32.586 [INFO][6233] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Jul 11 00:05:32.627046 containerd[1432]: 2025-07-11 00:05:32.586 [INFO][6233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Jul 11 00:05:32.627046 containerd[1432]: 2025-07-11 00:05:32.607 [INFO][6241] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" HandleID="k8s-pod-network.fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Workload="localhost-k8s-csi--node--driver--8t882-eth0" Jul 11 00:05:32.627046 containerd[1432]: 2025-07-11 00:05:32.608 [INFO][6241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:32.627046 containerd[1432]: 2025-07-11 00:05:32.608 [INFO][6241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:32.627046 containerd[1432]: 2025-07-11 00:05:32.619 [WARNING][6241] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" HandleID="k8s-pod-network.fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Workload="localhost-k8s-csi--node--driver--8t882-eth0" Jul 11 00:05:32.627046 containerd[1432]: 2025-07-11 00:05:32.619 [INFO][6241] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" HandleID="k8s-pod-network.fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Workload="localhost-k8s-csi--node--driver--8t882-eth0" Jul 11 00:05:32.627046 containerd[1432]: 2025-07-11 00:05:32.622 [INFO][6241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:32.627046 containerd[1432]: 2025-07-11 00:05:32.624 [INFO][6233] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Jul 11 00:05:32.627046 containerd[1432]: time="2025-07-11T00:05:32.626880292Z" level=info msg="TearDown network for sandbox \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\" successfully" Jul 11 00:05:32.627046 containerd[1432]: time="2025-07-11T00:05:32.626906933Z" level=info msg="StopPodSandbox for \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\" returns successfully" Jul 11 00:05:32.627637 containerd[1432]: time="2025-07-11T00:05:32.627335463Z" level=info msg="RemovePodSandbox for \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\"" Jul 11 00:05:32.627637 containerd[1432]: time="2025-07-11T00:05:32.627367184Z" level=info msg="Forcibly stopping sandbox \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\"" Jul 11 00:05:32.705132 containerd[1432]: 2025-07-11 00:05:32.667 [WARNING][6261] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8t882-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69b81cc5-c8b0-45b3-aeb9-88292bebdc48", ResourceVersion:"1239", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb5d9e593656ff7f3db233065d3d052807c4d0260a356d7dc8039d3ab8fa61a1", Pod:"csi-node-driver-8t882", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid806a331429", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:32.705132 containerd[1432]: 2025-07-11 00:05:32.667 [INFO][6261] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Jul 11 00:05:32.705132 containerd[1432]: 2025-07-11 00:05:32.667 [INFO][6261] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" iface="eth0" netns="" Jul 11 00:05:32.705132 containerd[1432]: 2025-07-11 00:05:32.667 [INFO][6261] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Jul 11 00:05:32.705132 containerd[1432]: 2025-07-11 00:05:32.668 [INFO][6261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Jul 11 00:05:32.705132 containerd[1432]: 2025-07-11 00:05:32.687 [INFO][6270] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" HandleID="k8s-pod-network.fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Workload="localhost-k8s-csi--node--driver--8t882-eth0" Jul 11 00:05:32.705132 containerd[1432]: 2025-07-11 00:05:32.688 [INFO][6270] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:32.705132 containerd[1432]: 2025-07-11 00:05:32.688 [INFO][6270] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:32.705132 containerd[1432]: 2025-07-11 00:05:32.698 [WARNING][6270] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" HandleID="k8s-pod-network.fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Workload="localhost-k8s-csi--node--driver--8t882-eth0" Jul 11 00:05:32.705132 containerd[1432]: 2025-07-11 00:05:32.698 [INFO][6270] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" HandleID="k8s-pod-network.fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Workload="localhost-k8s-csi--node--driver--8t882-eth0" Jul 11 00:05:32.705132 containerd[1432]: 2025-07-11 00:05:32.700 [INFO][6270] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:32.705132 containerd[1432]: 2025-07-11 00:05:32.703 [INFO][6261] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2" Jul 11 00:05:32.705132 containerd[1432]: time="2025-07-11T00:05:32.705095482Z" level=info msg="TearDown network for sandbox \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\" successfully" Jul 11 00:05:32.709626 containerd[1432]: time="2025-07-11T00:05:32.709580585Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:05:32.709724 containerd[1432]: time="2025-07-11T00:05:32.709659427Z" level=info msg="RemovePodSandbox \"fe7ecaba173426cc87156f865cae2a8f436744008718498498a4e846b0906cf2\" returns successfully" Jul 11 00:05:32.710226 containerd[1432]: time="2025-07-11T00:05:32.710195679Z" level=info msg="StopPodSandbox for \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\"" Jul 11 00:05:32.798656 containerd[1432]: 2025-07-11 00:05:32.747 [WARNING][6288] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9", Pod:"coredns-7c65d6cfc9-t59gl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11b7f30a3a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:32.798656 containerd[1432]: 2025-07-11 00:05:32.747 [INFO][6288] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:32.798656 containerd[1432]: 2025-07-11 00:05:32.747 [INFO][6288] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" iface="eth0" netns="" Jul 11 00:05:32.798656 containerd[1432]: 2025-07-11 00:05:32.747 [INFO][6288] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:32.798656 containerd[1432]: 2025-07-11 00:05:32.747 [INFO][6288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:32.798656 containerd[1432]: 2025-07-11 00:05:32.773 [INFO][6296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" HandleID="k8s-pod-network.9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Workload="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:32.798656 containerd[1432]: 2025-07-11 00:05:32.773 [INFO][6296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:32.798656 containerd[1432]: 2025-07-11 00:05:32.773 [INFO][6296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:05:32.798656 containerd[1432]: 2025-07-11 00:05:32.786 [WARNING][6296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" HandleID="k8s-pod-network.9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Workload="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:32.798656 containerd[1432]: 2025-07-11 00:05:32.786 [INFO][6296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" HandleID="k8s-pod-network.9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Workload="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:32.798656 containerd[1432]: 2025-07-11 00:05:32.790 [INFO][6296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:32.798656 containerd[1432]: 2025-07-11 00:05:32.795 [INFO][6288] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:32.799278 containerd[1432]: time="2025-07-11T00:05:32.798693824Z" level=info msg="TearDown network for sandbox \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\" successfully" Jul 11 00:05:32.799278 containerd[1432]: time="2025-07-11T00:05:32.798719185Z" level=info msg="StopPodSandbox for \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\" returns successfully" Jul 11 00:05:32.799892 containerd[1432]: time="2025-07-11T00:05:32.799512843Z" level=info msg="RemovePodSandbox for \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\"" Jul 11 00:05:32.799892 containerd[1432]: time="2025-07-11T00:05:32.799550564Z" level=info msg="Forcibly stopping sandbox \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\"" Jul 11 00:05:32.891344 containerd[1432]: 2025-07-11 00:05:32.848 [WARNING][6330] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"2ad5e86d-f0d2-48a9-bf45-69b8bfcd24a6", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8760ace805e1a6d389fec822f576a9345c54d281e6624592a71766781ad09f9", Pod:"coredns-7c65d6cfc9-t59gl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11b7f30a3a9", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:32.891344 containerd[1432]: 2025-07-11 00:05:32.849 [INFO][6330] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:32.891344 containerd[1432]: 2025-07-11 00:05:32.849 [INFO][6330] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" iface="eth0" netns="" Jul 11 00:05:32.891344 containerd[1432]: 2025-07-11 00:05:32.849 [INFO][6330] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:32.891344 containerd[1432]: 2025-07-11 00:05:32.849 [INFO][6330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:32.891344 containerd[1432]: 2025-07-11 00:05:32.874 [INFO][6344] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" HandleID="k8s-pod-network.9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Workload="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:32.891344 containerd[1432]: 2025-07-11 00:05:32.875 [INFO][6344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:32.891344 containerd[1432]: 2025-07-11 00:05:32.875 [INFO][6344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:05:32.891344 containerd[1432]: 2025-07-11 00:05:32.886 [WARNING][6344] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" HandleID="k8s-pod-network.9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Workload="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:32.891344 containerd[1432]: 2025-07-11 00:05:32.886 [INFO][6344] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" HandleID="k8s-pod-network.9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Workload="localhost-k8s-coredns--7c65d6cfc9--t59gl-eth0" Jul 11 00:05:32.891344 containerd[1432]: 2025-07-11 00:05:32.887 [INFO][6344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:32.891344 containerd[1432]: 2025-07-11 00:05:32.889 [INFO][6330] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159" Jul 11 00:05:32.891982 containerd[1432]: time="2025-07-11T00:05:32.891379865Z" level=info msg="TearDown network for sandbox \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\" successfully" Jul 11 00:05:32.894217 containerd[1432]: time="2025-07-11T00:05:32.894183089Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:05:32.894280 containerd[1432]: time="2025-07-11T00:05:32.894247691Z" level=info msg="RemovePodSandbox \"9aad650bb802825bf87c884b2b7260211b01492d7c214203103925c465c7f159\" returns successfully" Jul 11 00:05:32.894785 containerd[1432]: time="2025-07-11T00:05:32.894761343Z" level=info msg="StopPodSandbox for \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\"" Jul 11 00:05:32.959947 containerd[1432]: 2025-07-11 00:05:32.927 [WARNING][6363] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0", GenerateName:"calico-apiserver-bcf45dd9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e57c9b88-19af-4e41-ab80-5de76c7ad975", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcf45dd9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491", Pod:"calico-apiserver-bcf45dd9c-fb758", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliebb2979b059", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:32.959947 containerd[1432]: 2025-07-11 00:05:32.927 [INFO][6363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:32.959947 containerd[1432]: 2025-07-11 00:05:32.927 [INFO][6363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" iface="eth0" netns="" Jul 11 00:05:32.959947 containerd[1432]: 2025-07-11 00:05:32.927 [INFO][6363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:32.959947 containerd[1432]: 2025-07-11 00:05:32.927 [INFO][6363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:32.959947 containerd[1432]: 2025-07-11 00:05:32.946 [INFO][6372] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" HandleID="k8s-pod-network.5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:32.959947 containerd[1432]: 2025-07-11 00:05:32.946 [INFO][6372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:32.959947 containerd[1432]: 2025-07-11 00:05:32.946 [INFO][6372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:32.959947 containerd[1432]: 2025-07-11 00:05:32.954 [WARNING][6372] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" HandleID="k8s-pod-network.5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:32.959947 containerd[1432]: 2025-07-11 00:05:32.955 [INFO][6372] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" HandleID="k8s-pod-network.5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:32.959947 containerd[1432]: 2025-07-11 00:05:32.956 [INFO][6372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:32.959947 containerd[1432]: 2025-07-11 00:05:32.958 [INFO][6363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:32.959947 containerd[1432]: time="2025-07-11T00:05:32.959920434Z" level=info msg="TearDown network for sandbox \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\" successfully" Jul 11 00:05:32.959947 containerd[1432]: time="2025-07-11T00:05:32.959944554Z" level=info msg="StopPodSandbox for \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\" returns successfully" Jul 11 00:05:32.960442 containerd[1432]: time="2025-07-11T00:05:32.960361404Z" level=info msg="RemovePodSandbox for \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\"" Jul 11 00:05:32.960442 containerd[1432]: time="2025-07-11T00:05:32.960389004Z" level=info msg="Forcibly stopping sandbox \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\"" Jul 11 00:05:33.025004 containerd[1432]: 2025-07-11 00:05:32.993 [WARNING][6389] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0", GenerateName:"calico-apiserver-bcf45dd9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"e57c9b88-19af-4e41-ab80-5de76c7ad975", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bcf45dd9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491", Pod:"calico-apiserver-bcf45dd9c-fb758", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliebb2979b059", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:33.025004 containerd[1432]: 2025-07-11 00:05:32.993 [INFO][6389] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:33.025004 containerd[1432]: 2025-07-11 00:05:32.993 [INFO][6389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" iface="eth0" netns="" Jul 11 00:05:33.025004 containerd[1432]: 2025-07-11 00:05:32.993 [INFO][6389] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:33.025004 containerd[1432]: 2025-07-11 00:05:32.993 [INFO][6389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:33.025004 containerd[1432]: 2025-07-11 00:05:33.012 [INFO][6397] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" HandleID="k8s-pod-network.5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:33.025004 containerd[1432]: 2025-07-11 00:05:33.012 [INFO][6397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:33.025004 containerd[1432]: 2025-07-11 00:05:33.012 [INFO][6397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:33.025004 containerd[1432]: 2025-07-11 00:05:33.020 [WARNING][6397] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" HandleID="k8s-pod-network.5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:33.025004 containerd[1432]: 2025-07-11 00:05:33.020 [INFO][6397] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" HandleID="k8s-pod-network.5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:33.025004 containerd[1432]: 2025-07-11 00:05:33.021 [INFO][6397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:33.025004 containerd[1432]: 2025-07-11 00:05:33.023 [INFO][6389] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf" Jul 11 00:05:33.025396 containerd[1432]: time="2025-07-11T00:05:33.025044478Z" level=info msg="TearDown network for sandbox \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\" successfully" Jul 11 00:05:33.027820 containerd[1432]: time="2025-07-11T00:05:33.027788100Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:05:33.027931 containerd[1432]: time="2025-07-11T00:05:33.027865782Z" level=info msg="RemovePodSandbox \"5515ccb35f76f9ab76698f60b94dd9ebab9e7dd14c5bec60126a952bee80baaf\" returns successfully" Jul 11 00:05:33.028394 containerd[1432]: time="2025-07-11T00:05:33.028367873Z" level=info msg="StopPodSandbox for \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\"" Jul 11 00:05:33.094514 containerd[1432]: 2025-07-11 00:05:33.062 [WARNING][6414] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"78d5efbc-6d02-414b-bb0f-acc8122968c7", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5", Pod:"coredns-7c65d6cfc9-fj8dx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e6f9e96cfa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:33.094514 containerd[1432]: 2025-07-11 00:05:33.063 [INFO][6414] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:33.094514 containerd[1432]: 2025-07-11 00:05:33.063 [INFO][6414] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" iface="eth0" netns="" Jul 11 00:05:33.094514 containerd[1432]: 2025-07-11 00:05:33.063 [INFO][6414] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:33.094514 containerd[1432]: 2025-07-11 00:05:33.063 [INFO][6414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:33.094514 containerd[1432]: 2025-07-11 00:05:33.081 [INFO][6423] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" HandleID="k8s-pod-network.1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Workload="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:33.094514 containerd[1432]: 2025-07-11 00:05:33.081 [INFO][6423] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:33.094514 containerd[1432]: 2025-07-11 00:05:33.081 [INFO][6423] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:05:33.094514 containerd[1432]: 2025-07-11 00:05:33.089 [WARNING][6423] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" HandleID="k8s-pod-network.1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Workload="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:33.094514 containerd[1432]: 2025-07-11 00:05:33.089 [INFO][6423] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" HandleID="k8s-pod-network.1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Workload="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:33.094514 containerd[1432]: 2025-07-11 00:05:33.091 [INFO][6423] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:33.094514 containerd[1432]: 2025-07-11 00:05:33.093 [INFO][6414] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:33.094943 containerd[1432]: time="2025-07-11T00:05:33.094550212Z" level=info msg="TearDown network for sandbox \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\" successfully" Jul 11 00:05:33.094943 containerd[1432]: time="2025-07-11T00:05:33.094576013Z" level=info msg="StopPodSandbox for \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\" returns successfully" Jul 11 00:05:33.095333 containerd[1432]: time="2025-07-11T00:05:33.095300949Z" level=info msg="RemovePodSandbox for \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\"" Jul 11 00:05:33.095394 containerd[1432]: time="2025-07-11T00:05:33.095357870Z" level=info msg="Forcibly stopping sandbox \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\"" Jul 11 00:05:33.159763 containerd[1432]: 2025-07-11 00:05:33.128 [WARNING][6441] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"78d5efbc-6d02-414b-bb0f-acc8122968c7", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 0, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bde9eb0e612e6953e94ef870ab6e10ee904e2a178dae6b6b3187edda92f0a8d5", Pod:"coredns-7c65d6cfc9-fj8dx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e6f9e96cfa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 00:05:33.159763 containerd[1432]: 2025-07-11 00:05:33.129 [INFO][6441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:33.159763 containerd[1432]: 2025-07-11 00:05:33.129 [INFO][6441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" iface="eth0" netns="" Jul 11 00:05:33.159763 containerd[1432]: 2025-07-11 00:05:33.129 [INFO][6441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:33.159763 containerd[1432]: 2025-07-11 00:05:33.129 [INFO][6441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:33.159763 containerd[1432]: 2025-07-11 00:05:33.146 [INFO][6450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" HandleID="k8s-pod-network.1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Workload="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:33.159763 containerd[1432]: 2025-07-11 00:05:33.146 [INFO][6450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:33.159763 containerd[1432]: 2025-07-11 00:05:33.146 [INFO][6450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 00:05:33.159763 containerd[1432]: 2025-07-11 00:05:33.154 [WARNING][6450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" HandleID="k8s-pod-network.1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Workload="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:33.159763 containerd[1432]: 2025-07-11 00:05:33.154 [INFO][6450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" HandleID="k8s-pod-network.1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Workload="localhost-k8s-coredns--7c65d6cfc9--fj8dx-eth0" Jul 11 00:05:33.159763 containerd[1432]: 2025-07-11 00:05:33.156 [INFO][6450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:33.159763 containerd[1432]: 2025-07-11 00:05:33.158 [INFO][6441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b" Jul 11 00:05:33.160194 containerd[1432]: time="2025-07-11T00:05:33.159803810Z" level=info msg="TearDown network for sandbox \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\" successfully" Jul 11 00:05:33.165789 containerd[1432]: time="2025-07-11T00:05:33.165744824Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:05:33.165861 containerd[1432]: time="2025-07-11T00:05:33.165818266Z" level=info msg="RemovePodSandbox \"1e01958dbfa59b9b8e328e51c167ea13112e4b9725d2b6b4bf8782f5699c768b\" returns successfully" Jul 11 00:05:33.166627 containerd[1432]: time="2025-07-11T00:05:33.166359718Z" level=info msg="StopPodSandbox for \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\"" Jul 11 00:05:33.232184 containerd[1432]: 2025-07-11 00:05:33.196 [WARNING][6468] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.232184 containerd[1432]: 2025-07-11 00:05:33.197 [INFO][6468] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:33.232184 containerd[1432]: 2025-07-11 00:05:33.197 [INFO][6468] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" iface="eth0" netns="" Jul 11 00:05:33.232184 containerd[1432]: 2025-07-11 00:05:33.197 [INFO][6468] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:33.232184 containerd[1432]: 2025-07-11 00:05:33.197 [INFO][6468] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:33.232184 containerd[1432]: 2025-07-11 00:05:33.217 [INFO][6476] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" HandleID="k8s-pod-network.9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.232184 containerd[1432]: 2025-07-11 00:05:33.218 [INFO][6476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:33.232184 containerd[1432]: 2025-07-11 00:05:33.218 [INFO][6476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:33.232184 containerd[1432]: 2025-07-11 00:05:33.227 [WARNING][6476] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" HandleID="k8s-pod-network.9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.232184 containerd[1432]: 2025-07-11 00:05:33.227 [INFO][6476] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" HandleID="k8s-pod-network.9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.232184 containerd[1432]: 2025-07-11 00:05:33.228 [INFO][6476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:33.232184 containerd[1432]: 2025-07-11 00:05:33.230 [INFO][6468] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:33.232184 containerd[1432]: time="2025-07-11T00:05:33.232150328Z" level=info msg="TearDown network for sandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\" successfully" Jul 11 00:05:33.232184 containerd[1432]: time="2025-07-11T00:05:33.232175888Z" level=info msg="StopPodSandbox for \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\" returns successfully" Jul 11 00:05:33.233818 containerd[1432]: time="2025-07-11T00:05:33.233786885Z" level=info msg="RemovePodSandbox for \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\"" Jul 11 00:05:33.234421 containerd[1432]: time="2025-07-11T00:05:33.234061211Z" level=info msg="Forcibly stopping sandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\"" Jul 11 00:05:33.297717 containerd[1432]: 2025-07-11 00:05:33.265 [WARNING][6494] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.297717 containerd[1432]: 2025-07-11 00:05:33.265 [INFO][6494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:33.297717 containerd[1432]: 2025-07-11 00:05:33.265 [INFO][6494] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" iface="eth0" netns="" Jul 11 00:05:33.297717 containerd[1432]: 2025-07-11 00:05:33.265 [INFO][6494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:33.297717 containerd[1432]: 2025-07-11 00:05:33.265 [INFO][6494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:33.297717 containerd[1432]: 2025-07-11 00:05:33.282 [INFO][6502] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" HandleID="k8s-pod-network.9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.297717 containerd[1432]: 2025-07-11 00:05:33.282 [INFO][6502] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:33.297717 containerd[1432]: 2025-07-11 00:05:33.282 [INFO][6502] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:33.297717 containerd[1432]: 2025-07-11 00:05:33.290 [WARNING][6502] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" HandleID="k8s-pod-network.9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.297717 containerd[1432]: 2025-07-11 00:05:33.290 [INFO][6502] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" HandleID="k8s-pod-network.9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.297717 containerd[1432]: 2025-07-11 00:05:33.294 [INFO][6502] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:33.297717 containerd[1432]: 2025-07-11 00:05:33.296 [INFO][6494] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3" Jul 11 00:05:33.298871 containerd[1432]: time="2025-07-11T00:05:33.298158383Z" level=info msg="TearDown network for sandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\" successfully" Jul 11 00:05:33.301188 containerd[1432]: time="2025-07-11T00:05:33.301149690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:05:33.301238 containerd[1432]: time="2025-07-11T00:05:33.301219612Z" level=info msg="RemovePodSandbox \"9d31b661825988547235173ff329a52d8121cf16255c45fa6d46b3d64a9020d3\" returns successfully" Jul 11 00:05:33.302074 containerd[1432]: time="2025-07-11T00:05:33.301712103Z" level=info msg="StopPodSandbox for \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\"" Jul 11 00:05:33.365988 containerd[1432]: 2025-07-11 00:05:33.333 [WARNING][6519] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.365988 containerd[1432]: 2025-07-11 00:05:33.333 [INFO][6519] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Jul 11 00:05:33.365988 containerd[1432]: 2025-07-11 00:05:33.333 [INFO][6519] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" iface="eth0" netns="" Jul 11 00:05:33.365988 containerd[1432]: 2025-07-11 00:05:33.334 [INFO][6519] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Jul 11 00:05:33.365988 containerd[1432]: 2025-07-11 00:05:33.334 [INFO][6519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Jul 11 00:05:33.365988 containerd[1432]: 2025-07-11 00:05:33.353 [INFO][6527] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" HandleID="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.365988 containerd[1432]: 2025-07-11 00:05:33.353 [INFO][6527] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:33.365988 containerd[1432]: 2025-07-11 00:05:33.353 [INFO][6527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:33.365988 containerd[1432]: 2025-07-11 00:05:33.361 [WARNING][6527] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" HandleID="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.365988 containerd[1432]: 2025-07-11 00:05:33.361 [INFO][6527] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" HandleID="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.365988 containerd[1432]: 2025-07-11 00:05:33.362 [INFO][6527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:33.365988 containerd[1432]: 2025-07-11 00:05:33.364 [INFO][6519] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Jul 11 00:05:33.366336 containerd[1432]: time="2025-07-11T00:05:33.366020519Z" level=info msg="TearDown network for sandbox \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\" successfully" Jul 11 00:05:33.366336 containerd[1432]: time="2025-07-11T00:05:33.366045680Z" level=info msg="StopPodSandbox for \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\" returns successfully" Jul 11 00:05:33.366857 containerd[1432]: time="2025-07-11T00:05:33.366558891Z" level=info msg="RemovePodSandbox for \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\"" Jul 11 00:05:33.366857 containerd[1432]: time="2025-07-11T00:05:33.366592252Z" level=info msg="Forcibly stopping sandbox \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\"" Jul 11 00:05:33.434246 containerd[1432]: 2025-07-11 00:05:33.398 [WARNING][6544] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" WorkloadEndpoint="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.434246 containerd[1432]: 2025-07-11 00:05:33.399 [INFO][6544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Jul 11 00:05:33.434246 containerd[1432]: 2025-07-11 00:05:33.399 [INFO][6544] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" iface="eth0" netns="" Jul 11 00:05:33.434246 containerd[1432]: 2025-07-11 00:05:33.399 [INFO][6544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Jul 11 00:05:33.434246 containerd[1432]: 2025-07-11 00:05:33.399 [INFO][6544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Jul 11 00:05:33.434246 containerd[1432]: 2025-07-11 00:05:33.417 [INFO][6553] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" HandleID="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.434246 containerd[1432]: 2025-07-11 00:05:33.417 [INFO][6553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:33.434246 containerd[1432]: 2025-07-11 00:05:33.417 [INFO][6553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:33.434246 containerd[1432]: 2025-07-11 00:05:33.427 [WARNING][6553] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" HandleID="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.434246 containerd[1432]: 2025-07-11 00:05:33.427 [INFO][6553] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" HandleID="k8s-pod-network.bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--dssv7-eth0" Jul 11 00:05:33.434246 containerd[1432]: 2025-07-11 00:05:33.428 [INFO][6553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:33.434246 containerd[1432]: 2025-07-11 00:05:33.432 [INFO][6544] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df" Jul 11 00:05:33.434701 containerd[1432]: time="2025-07-11T00:05:33.434672114Z" level=info msg="TearDown network for sandbox \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\" successfully" Jul 11 00:05:33.437529 containerd[1432]: time="2025-07-11T00:05:33.437498818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 11 00:05:33.437739 containerd[1432]: time="2025-07-11T00:05:33.437641461Z" level=info msg="RemovePodSandbox \"bd490984cb4122d58fe65b9243698cd36ddb22ea46d5b0c82e88fa93ba1147df\" returns successfully" Jul 11 00:05:35.696677 kubelet[2472]: I0711 00:05:35.696636 2472 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 00:05:35.740667 containerd[1432]: time="2025-07-11T00:05:35.740602140Z" level=info msg="StopContainer for \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\" with timeout 30 (s)" Jul 11 00:05:35.741857 containerd[1432]: time="2025-07-11T00:05:35.741144872Z" level=info msg="Stop container \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\" with signal terminated" Jul 11 00:05:35.766452 systemd[1]: cri-containerd-b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6.scope: Deactivated successfully. Jul 11 00:05:35.766916 systemd[1]: cri-containerd-b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6.scope: Consumed 1.488s CPU time. Jul 11 00:05:35.793291 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6-rootfs.mount: Deactivated successfully. 
Jul 11 00:05:35.794350 containerd[1432]: time="2025-07-11T00:05:35.794281092Z" level=info msg="shim disconnected" id=b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6 namespace=k8s.io Jul 11 00:05:35.794350 containerd[1432]: time="2025-07-11T00:05:35.794345094Z" level=warning msg="cleaning up after shim disconnected" id=b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6 namespace=k8s.io Jul 11 00:05:35.794439 containerd[1432]: time="2025-07-11T00:05:35.794354374Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:05:35.820348 containerd[1432]: time="2025-07-11T00:05:35.820294110Z" level=info msg="StopContainer for \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\" returns successfully" Jul 11 00:05:35.820960 containerd[1432]: time="2025-07-11T00:05:35.820912004Z" level=info msg="StopPodSandbox for \"c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491\"" Jul 11 00:05:35.821008 containerd[1432]: time="2025-07-11T00:05:35.820961045Z" level=info msg="Container to stop \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 11 00:05:35.823514 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491-shm.mount: Deactivated successfully. Jul 11 00:05:35.829480 systemd[1]: cri-containerd-c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491.scope: Deactivated successfully. Jul 11 00:05:35.849116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491-rootfs.mount: Deactivated successfully. Jul 11 00:05:35.850018 containerd[1432]: time="2025-07-11T00:05:35.849959089Z" level=info msg="shim disconnected" id=c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491 namespace=k8s.io Jul 11 00:05:35.850094 containerd[1432]: time="2025-07-11T00:05:35.850019770Z" level=warning msg="cleaning up after shim disconnected" id=c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491 namespace=k8s.io Jul 11 00:05:35.850094 containerd[1432]: time="2025-07-11T00:05:35.850029770Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 11 00:05:35.910090 systemd-networkd[1371]: caliebb2979b059: Link DOWN Jul 11 00:05:35.910099 systemd-networkd[1371]: caliebb2979b059: Lost carrier Jul 11 00:05:35.985017 containerd[1432]: 2025-07-11 00:05:35.907 [INFO][6661] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Jul 11 00:05:35.985017 containerd[1432]: 2025-07-11 00:05:35.908 [INFO][6661] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" iface="eth0" netns="/var/run/netns/cni-2be9dded-340f-9caf-bb44-7090835e21a2" Jul 11 00:05:35.985017 containerd[1432]: 2025-07-11 00:05:35.908 [INFO][6661] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" iface="eth0" netns="/var/run/netns/cni-2be9dded-340f-9caf-bb44-7090835e21a2" Jul 11 00:05:35.985017 containerd[1432]: 2025-07-11 00:05:35.922 [INFO][6661] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" after=13.543421ms iface="eth0" netns="/var/run/netns/cni-2be9dded-340f-9caf-bb44-7090835e21a2" Jul 11 00:05:35.985017 containerd[1432]: 2025-07-11 00:05:35.922 [INFO][6661] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Jul 11 00:05:35.985017 containerd[1432]: 2025-07-11 00:05:35.922 [INFO][6661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Jul 11 00:05:35.985017 containerd[1432]: 2025-07-11 00:05:35.942 [INFO][6678] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" HandleID="k8s-pod-network.c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:35.985017 containerd[1432]: 2025-07-11 00:05:35.942 [INFO][6678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 00:05:35.985017 containerd[1432]: 2025-07-11 00:05:35.942 [INFO][6678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 00:05:35.985017 containerd[1432]: 2025-07-11 00:05:35.977 [INFO][6678] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" HandleID="k8s-pod-network.c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:35.985017 containerd[1432]: 2025-07-11 00:05:35.977 [INFO][6678] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" HandleID="k8s-pod-network.c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Workload="localhost-k8s-calico--apiserver--bcf45dd9c--fb758-eth0" Jul 11 00:05:35.985017 containerd[1432]: 2025-07-11 00:05:35.979 [INFO][6678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 00:05:35.985017 containerd[1432]: 2025-07-11 00:05:35.982 [INFO][6661] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491" Jul 11 00:05:35.985963 containerd[1432]: time="2025-07-11T00:05:35.985177651Z" level=info msg="TearDown network for sandbox \"c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491\" successfully" Jul 11 00:05:35.985963 containerd[1432]: time="2025-07-11T00:05:35.985246133Z" level=info msg="StopPodSandbox for \"c0f0199c95e452e3fa7965822fe3d793d50b282c5ab2ff319cf2fa09b56e0491\" returns successfully" Jul 11 00:05:35.987666 systemd[1]: run-netns-cni\x2d2be9dded\x2d340f\x2d9caf\x2dbb44\x2d7090835e21a2.mount: Deactivated successfully. 
Jul 11 00:05:36.157275 kubelet[2472]: I0711 00:05:36.156829 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfhmj\" (UniqueName: \"kubernetes.io/projected/e57c9b88-19af-4e41-ab80-5de76c7ad975-kube-api-access-wfhmj\") pod \"e57c9b88-19af-4e41-ab80-5de76c7ad975\" (UID: \"e57c9b88-19af-4e41-ab80-5de76c7ad975\") "
Jul 11 00:05:36.157275 kubelet[2472]: I0711 00:05:36.156911 2472 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e57c9b88-19af-4e41-ab80-5de76c7ad975-calico-apiserver-certs\") pod \"e57c9b88-19af-4e41-ab80-5de76c7ad975\" (UID: \"e57c9b88-19af-4e41-ab80-5de76c7ad975\") "
Jul 11 00:05:36.162157 kubelet[2472]: I0711 00:05:36.162079 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57c9b88-19af-4e41-ab80-5de76c7ad975-kube-api-access-wfhmj" (OuterVolumeSpecName: "kube-api-access-wfhmj") pod "e57c9b88-19af-4e41-ab80-5de76c7ad975" (UID: "e57c9b88-19af-4e41-ab80-5de76c7ad975"). InnerVolumeSpecName "kube-api-access-wfhmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 11 00:05:36.162348 kubelet[2472]: I0711 00:05:36.162323 2472 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e57c9b88-19af-4e41-ab80-5de76c7ad975-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "e57c9b88-19af-4e41-ab80-5de76c7ad975" (UID: "e57c9b88-19af-4e41-ab80-5de76c7ad975"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 11 00:05:36.163688 systemd[1]: var-lib-kubelet-pods-e57c9b88\x2d19af\x2d4e41\x2dab80\x2d5de76c7ad975-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwfhmj.mount: Deactivated successfully.
Jul 11 00:05:36.163791 systemd[1]: var-lib-kubelet-pods-e57c9b88\x2d19af\x2d4e41\x2dab80\x2d5de76c7ad975-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Jul 11 00:05:36.231089 kubelet[2472]: I0711 00:05:36.230918 2472 scope.go:117] "RemoveContainer" containerID="b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6"
Jul 11 00:05:36.233733 containerd[1432]: time="2025-07-11T00:05:36.233678164Z" level=info msg="RemoveContainer for \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\""
Jul 11 00:05:36.236309 systemd[1]: Removed slice kubepods-besteffort-pode57c9b88_19af_4e41_ab80_5de76c7ad975.slice - libcontainer container kubepods-besteffort-pode57c9b88_19af_4e41_ab80_5de76c7ad975.slice.
Jul 11 00:05:36.236410 systemd[1]: kubepods-besteffort-pode57c9b88_19af_4e41_ab80_5de76c7ad975.slice: Consumed 1.504s CPU time.
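The mount-unit names above use systemd's path escaping: the leading '/' is stripped, interior '/' separators become '-', and any byte outside systemd's allowed set is written as \xXX, which is why the literal '-' in the pod UID appears as \x2d and the '~' in kubernetes.io~projected as \x7e. A rough sketch of that mapping (covering the common case only; systemd additionally special-cases an empty path and a leading '.', which this ignores):

```go
package main

import (
	"fmt"
	"strings"
)

// allowed reports whether a byte may appear literally in a unit name.
// systemd keeps alphanumerics plus ':', '_', and '.'; '-' must be escaped
// because it doubles as the path separator in unit names.
func allowed(b byte) bool {
	return b >= 'a' && b <= 'z' || b >= 'A' && b <= 'Z' ||
		b >= '0' && b <= '9' || b == ':' || b == '_' || b == '.'
}

// pathToUnit approximates `systemd-escape --path`.
func pathToUnit(path string) string {
	var sb strings.Builder
	for _, b := range []byte(strings.Trim(path, "/")) {
		switch {
		case b == '/':
			sb.WriteByte('-')
		case allowed(b):
			sb.WriteByte(b)
		default:
			fmt.Fprintf(&sb, `\x%02x`, b)
		}
	}
	return sb.String()
}

func main() {
	// Reproduces the shape of the first mount unit in the log:
	// var-lib-kubelet-pods-e57c9b88\x2d...\x7eprojected-kube\x2dapi\x2daccess\x2dwfhmj.mount
	fmt.Println(pathToUnit("/var/lib/kubelet/pods/e57c9b88-19af-4e41-ab80-5de76c7ad975/volumes/kubernetes.io~projected/kube-api-access-wfhmj") + ".mount")
}
```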
Jul 11 00:05:36.238830 containerd[1432]: time="2025-07-11T00:05:36.238777876Z" level=info msg="RemoveContainer for \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\" returns successfully"
Jul 11 00:05:36.239083 kubelet[2472]: I0711 00:05:36.239047 2472 scope.go:117] "RemoveContainer" containerID="b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6"
Jul 11 00:05:36.240565 containerd[1432]: time="2025-07-11T00:05:36.240516114Z" level=error msg="ContainerStatus for \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\": not found"
Jul 11 00:05:36.241276 kubelet[2472]: E0711 00:05:36.241239 2472 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\": not found" containerID="b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6"
Jul 11 00:05:36.241417 kubelet[2472]: I0711 00:05:36.241284 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6"} err="failed to get container status \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\": rpc error: code = NotFound desc = an error occurred when try to find container \"b33cc58b00594f644366a03128eadae98c7990e3a9ff040555dfab30d5e784f6\": not found"
Jul 11 00:05:36.258008 kubelet[2472]: I0711 00:05:36.257961 2472 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfhmj\" (UniqueName: \"kubernetes.io/projected/e57c9b88-19af-4e41-ab80-5de76c7ad975-kube-api-access-wfhmj\") on node \"localhost\" DevicePath \"\""
Jul 11 00:05:36.258185 kubelet[2472]: I0711 00:05:36.258134 2472 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e57c9b88-19af-4e41-ab80-5de76c7ad975-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\""
Jul 11 00:05:37.606767 systemd[1]: Started sshd@13-10.0.0.27:22-10.0.0.1:39888.service - OpenSSH per-connection server daemon (10.0.0.1:39888).
Jul 11 00:05:37.641076 sshd[6692]: Accepted publickey for core from 10.0.0.1 port 39888 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:05:37.642298 sshd[6692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:05:37.645915 systemd-logind[1416]: New session 14 of user core.
Jul 11 00:05:37.659012 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 11 00:05:37.781156 sshd[6692]: pam_unix(sshd:session): session closed for user core
Jul 11 00:05:37.784726 systemd[1]: sshd@13-10.0.0.27:22-10.0.0.1:39888.service: Deactivated successfully.
Jul 11 00:05:37.786873 systemd[1]: session-14.scope: Deactivated successfully.
Jul 11 00:05:37.787664 systemd-logind[1416]: Session 14 logged out. Waiting for processes to exit.
Jul 11 00:05:37.788630 systemd-logind[1416]: Removed session 14.
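The ContainerStatus NotFound errors above are the benign race after a successful RemoveContainer: the kubelet re-queries a container the runtime has already deleted, receives gRPC NotFound from the CRI, and records it rather than retrying. A hedged sketch of how such a status error is told apart from real runtime failures; the classify helper and the simulated errors are illustrative, not kubelet code, though the grpc status/codes calls are the standard ones:

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// classify mirrors the pattern in the log: NotFound after a successful
// RemoveContainer is expected; anything else is a genuine failure.
func classify(err error) string {
	if err == nil {
		return "container still exists"
	}
	if status.Code(err) == codes.NotFound {
		return "already removed (expected after RemoveContainer)"
	}
	return "unexpected runtime failure"
}

func main() {
	// Simulated responses; a real caller would get err from the CRI
	// ContainerStatus RPC. The message text mimics containerd's wording.
	gone := status.Error(codes.NotFound,
		`an error occurred when try to find container "b33cc...": not found`)
	fmt.Println(classify(gone))
	// status.Code returns codes.Unknown for non-status errors, so this
	// falls through to the failure branch.
	fmt.Println(classify(errors.New("rpc error: code = Unavailable")))
}
```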
Jul 11 00:05:37.884920 kubelet[2472]: I0711 00:05:37.884792 2472 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e57c9b88-19af-4e41-ab80-5de76c7ad975" path="/var/lib/kubelet/pods/e57c9b88-19af-4e41-ab80-5de76c7ad975/volumes"
Jul 11 00:05:42.791864 systemd[1]: Started sshd@14-10.0.0.27:22-10.0.0.1:42196.service - OpenSSH per-connection server daemon (10.0.0.1:42196).
Jul 11 00:05:42.829132 sshd[6710]: Accepted publickey for core from 10.0.0.1 port 42196 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:05:42.830766 sshd[6710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:05:42.835600 systemd-logind[1416]: New session 15 of user core.
Jul 11 00:05:42.856083 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 11 00:05:42.975067 sshd[6710]: pam_unix(sshd:session): session closed for user core
Jul 11 00:05:42.978567 systemd[1]: sshd@14-10.0.0.27:22-10.0.0.1:42196.service: Deactivated successfully.
Jul 11 00:05:42.981441 systemd[1]: session-15.scope: Deactivated successfully.
Jul 11 00:05:42.983542 systemd-logind[1416]: Session 15 logged out. Waiting for processes to exit.
Jul 11 00:05:42.984418 systemd-logind[1416]: Removed session 15.
Jul 11 00:05:43.882318 kubelet[2472]: E0711 00:05:43.882276 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:05:47.993129 systemd[1]: Started sshd@15-10.0.0.27:22-10.0.0.1:42210.service - OpenSSH per-connection server daemon (10.0.0.1:42210).
Jul 11 00:05:48.023972 sshd[6733]: Accepted publickey for core from 10.0.0.1 port 42210 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:05:48.025445 sshd[6733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:05:48.029676 systemd-logind[1416]: New session 16 of user core.
Jul 11 00:05:48.035049 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 11 00:05:48.168954 sshd[6733]: pam_unix(sshd:session): session closed for user core
Jul 11 00:05:48.177524 systemd[1]: sshd@15-10.0.0.27:22-10.0.0.1:42210.service: Deactivated successfully.
Jul 11 00:05:48.180568 systemd[1]: session-16.scope: Deactivated successfully.
Jul 11 00:05:48.182400 systemd-logind[1416]: Session 16 logged out. Waiting for processes to exit.
Jul 11 00:05:48.198178 systemd[1]: Started sshd@16-10.0.0.27:22-10.0.0.1:42218.service - OpenSSH per-connection server daemon (10.0.0.1:42218).
Jul 11 00:05:48.200188 systemd-logind[1416]: Removed session 16.
Jul 11 00:05:48.228561 sshd[6747]: Accepted publickey for core from 10.0.0.1 port 42218 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:05:48.230103 sshd[6747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:05:48.237459 systemd-logind[1416]: New session 17 of user core.
Jul 11 00:05:48.248127 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 11 00:05:48.519069 sshd[6747]: pam_unix(sshd:session): session closed for user core
Jul 11 00:05:48.529629 systemd[1]: sshd@16-10.0.0.27:22-10.0.0.1:42218.service: Deactivated successfully.
Jul 11 00:05:48.531691 systemd[1]: session-17.scope: Deactivated successfully.
Jul 11 00:05:48.534243 systemd-logind[1416]: Session 17 logged out. Waiting for processes to exit.
Jul 11 00:05:48.535370 systemd[1]: Started sshd@17-10.0.0.27:22-10.0.0.1:42222.service - OpenSSH per-connection server daemon (10.0.0.1:42222).
Jul 11 00:05:48.536788 systemd-logind[1416]: Removed session 17.
Jul 11 00:05:48.573174 sshd[6759]: Accepted publickey for core from 10.0.0.1 port 42222 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:05:48.574491 sshd[6759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:05:48.581472 systemd-logind[1416]: New session 18 of user core.
Jul 11 00:05:48.590033 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 11 00:05:50.466285 sshd[6759]: pam_unix(sshd:session): session closed for user core
Jul 11 00:05:50.476650 systemd[1]: sshd@17-10.0.0.27:22-10.0.0.1:42222.service: Deactivated successfully.
Jul 11 00:05:50.480662 systemd[1]: session-18.scope: Deactivated successfully.
Jul 11 00:05:50.482535 systemd-logind[1416]: Session 18 logged out. Waiting for processes to exit.
Jul 11 00:05:50.493317 systemd[1]: Started sshd@18-10.0.0.27:22-10.0.0.1:42228.service - OpenSSH per-connection server daemon (10.0.0.1:42228).
Jul 11 00:05:50.495955 systemd-logind[1416]: Removed session 18.
Jul 11 00:05:50.536370 sshd[6782]: Accepted publickey for core from 10.0.0.1 port 42228 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:05:50.537965 sshd[6782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:05:50.543439 systemd-logind[1416]: New session 19 of user core.
Jul 11 00:05:50.550041 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 11 00:05:51.107091 sshd[6782]: pam_unix(sshd:session): session closed for user core
Jul 11 00:05:51.117487 systemd[1]: sshd@18-10.0.0.27:22-10.0.0.1:42228.service: Deactivated successfully.
Jul 11 00:05:51.119335 systemd[1]: session-19.scope: Deactivated successfully.
Jul 11 00:05:51.121604 systemd-logind[1416]: Session 19 logged out. Waiting for processes to exit.
Jul 11 00:05:51.136280 systemd[1]: Started sshd@19-10.0.0.27:22-10.0.0.1:42242.service - OpenSSH per-connection server daemon (10.0.0.1:42242).
Jul 11 00:05:51.138219 systemd-logind[1416]: Removed session 19.
Jul 11 00:05:51.191893 sshd[6794]: Accepted publickey for core from 10.0.0.1 port 42242 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:05:51.192719 sshd[6794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:05:51.201159 systemd-logind[1416]: New session 20 of user core.
Jul 11 00:05:51.205018 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 11 00:05:51.332465 sshd[6794]: pam_unix(sshd:session): session closed for user core
Jul 11 00:05:51.335441 systemd[1]: sshd@19-10.0.0.27:22-10.0.0.1:42242.service: Deactivated successfully.
Jul 11 00:05:51.337425 systemd[1]: session-20.scope: Deactivated successfully.
Jul 11 00:05:51.338804 systemd-logind[1416]: Session 20 logged out. Waiting for processes to exit.
Jul 11 00:05:51.340362 systemd-logind[1416]: Removed session 20.
Jul 11 00:05:56.352281 systemd[1]: Started sshd@20-10.0.0.27:22-10.0.0.1:44236.service - OpenSSH per-connection server daemon (10.0.0.1:44236).
Jul 11 00:05:56.394199 sshd[6836]: Accepted publickey for core from 10.0.0.1 port 44236 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:05:56.395516 sshd[6836]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:05:56.400532 systemd-logind[1416]: New session 21 of user core.
Jul 11 00:05:56.409185 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 11 00:05:56.554534 sshd[6836]: pam_unix(sshd:session): session closed for user core
Jul 11 00:05:56.562287 systemd[1]: sshd@20-10.0.0.27:22-10.0.0.1:44236.service: Deactivated successfully.
Jul 11 00:05:56.566233 systemd[1]: session-21.scope: Deactivated successfully.
Jul 11 00:05:56.570178 systemd-logind[1416]: Session 21 logged out. Waiting for processes to exit.
Jul 11 00:05:56.572894 systemd-logind[1416]: Removed session 21.
Jul 11 00:06:01.570434 systemd[1]: Started sshd@21-10.0.0.27:22-10.0.0.1:44250.service - OpenSSH per-connection server daemon (10.0.0.1:44250).
Jul 11 00:06:01.608751 sshd[6870]: Accepted publickey for core from 10.0.0.1 port 44250 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:06:01.611144 sshd[6870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:06:01.619994 systemd-logind[1416]: New session 22 of user core.
Jul 11 00:06:01.624047 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 11 00:06:01.761392 sshd[6870]: pam_unix(sshd:session): session closed for user core
Jul 11 00:06:01.766075 systemd[1]: sshd@21-10.0.0.27:22-10.0.0.1:44250.service: Deactivated successfully.
Jul 11 00:06:01.769629 systemd[1]: session-22.scope: Deactivated successfully.
Jul 11 00:06:01.771374 systemd-logind[1416]: Session 22 logged out. Waiting for processes to exit.
Jul 11 00:06:01.772301 systemd-logind[1416]: Removed session 22.
Jul 11 00:06:03.882865 kubelet[2472]: E0711 00:06:03.882480 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:06:03.882865 kubelet[2472]: E0711 00:06:03.882754 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 11 00:06:06.775106 systemd[1]: Started sshd@22-10.0.0.27:22-10.0.0.1:35262.service - OpenSSH per-connection server daemon (10.0.0.1:35262).
Jul 11 00:06:06.815831 sshd[6905]: Accepted publickey for core from 10.0.0.1 port 35262 ssh2: RSA SHA256:GK2LEBRiSxxQSb7NJczWsRz9vp5Z0addujXbSKx/c/M
Jul 11 00:06:06.818513 sshd[6905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 11 00:06:06.823562 systemd-logind[1416]: New session 23 of user core.
Jul 11 00:06:06.838316 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 11 00:06:06.979903 sshd[6905]: pam_unix(sshd:session): session closed for user core
Jul 11 00:06:06.982785 systemd[1]: sshd@22-10.0.0.27:22-10.0.0.1:35262.service: Deactivated successfully.
Jul 11 00:06:06.985640 systemd[1]: session-23.scope: Deactivated successfully.
Jul 11 00:06:06.989111 systemd-logind[1416]: Session 23 logged out. Waiting for processes to exit.
Jul 11 00:06:06.990906 systemd-logind[1416]: Removed session 23.
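The recurring dns.go "Nameserver limits exceeded" events mean the node's resolv.conf lists more nameservers than the kubelet will propagate to pods; it applies only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and logs that the rest were omitted, matching glibc's three-server resolver cap. A small sketch of that check, assuming the conventional limit of 3; the program and its output format are illustrative, not kubelet's own:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; the kubelet applies the same cap

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" entry in file order.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	if len(servers) > maxNameservers {
		// Mirrors the logged event: the extra entries are reported, then ignored.
		fmt.Printf("Nameserver limits exceeded: applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}
```

On this host a fourth nameserver beyond the three shown in the log would trigger exactly this event each time the kubelet rebuilds pod DNS config.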