Jul 14 21:49:54.971838 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 14 21:49:54.971860 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Jul 14 20:26:44 -00 2025
Jul 14 21:49:54.971870 kernel: KASLR enabled
Jul 14 21:49:54.971876 kernel: efi: EFI v2.7 by EDK II
Jul 14 21:49:54.971882 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 14 21:49:54.971888 kernel: random: crng init done
Jul 14 21:49:54.971895 kernel: ACPI: Early table checksum verification disabled
Jul 14 21:49:54.971901 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 14 21:49:54.971908 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 21:49:54.971915 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:49:54.971922 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:49:54.971928 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:49:54.971934 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:49:54.971940 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:49:54.971948 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:49:54.971956 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:49:54.971963 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:49:54.971970 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 21:49:54.971977 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 14 21:49:54.971983 kernel: NUMA: Failed to initialise from firmware
Jul 14 21:49:54.971990 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:49:54.971996 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 14 21:49:54.972003 kernel: Zone ranges:
Jul 14 21:49:54.972009 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:49:54.972016 kernel: DMA32 empty
Jul 14 21:49:54.972024 kernel: Normal empty
Jul 14 21:49:54.972030 kernel: Movable zone start for each node
Jul 14 21:49:54.972037 kernel: Early memory node ranges
Jul 14 21:49:54.972044 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 14 21:49:54.972050 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 14 21:49:54.972057 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 14 21:49:54.972063 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 14 21:49:54.972069 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 14 21:49:54.972076 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 14 21:49:54.972083 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 14 21:49:54.972089 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 14 21:49:54.972096 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 14 21:49:54.972104 kernel: psci: probing for conduit method from ACPI.
Jul 14 21:49:54.972110 kernel: psci: PSCIv1.1 detected in firmware.
Jul 14 21:49:54.972117 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 14 21:49:54.972126 kernel: psci: Trusted OS migration not required
Jul 14 21:49:54.972133 kernel: psci: SMC Calling Convention v1.1
Jul 14 21:49:54.972147 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 14 21:49:54.972156 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 14 21:49:54.972163 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 14 21:49:54.972237 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 14 21:49:54.972244 kernel: Detected PIPT I-cache on CPU0
Jul 14 21:49:54.972251 kernel: CPU features: detected: GIC system register CPU interface
Jul 14 21:49:54.972258 kernel: CPU features: detected: Hardware dirty bit management
Jul 14 21:49:54.972265 kernel: CPU features: detected: Spectre-v4
Jul 14 21:49:54.972272 kernel: CPU features: detected: Spectre-BHB
Jul 14 21:49:54.972279 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 14 21:49:54.972286 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 14 21:49:54.972295 kernel: CPU features: detected: ARM erratum 1418040
Jul 14 21:49:54.972302 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 14 21:49:54.972309 kernel: alternatives: applying boot alternatives
Jul 14 21:49:54.972317 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=219fd31147cccfc1f4834c1854a4109714661cabce52e86d5c93000af393c45b
Jul 14 21:49:54.972325 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 21:49:54.972331 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 21:49:54.972338 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 21:49:54.972345 kernel: Fallback order for Node 0: 0
Jul 14 21:49:54.972352 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 14 21:49:54.972359 kernel: Policy zone: DMA
Jul 14 21:49:54.972366 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 21:49:54.972374 kernel: software IO TLB: area num 4.
Jul 14 21:49:54.972381 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 14 21:49:54.972388 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Jul 14 21:49:54.972395 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 21:49:54.972402 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 21:49:54.972410 kernel: rcu: RCU event tracing is enabled.
Jul 14 21:49:54.972417 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 21:49:54.972424 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 21:49:54.972431 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 21:49:54.972438 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 21:49:54.972445 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 21:49:54.972451 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 14 21:49:54.972460 kernel: GICv3: 256 SPIs implemented
Jul 14 21:49:54.972467 kernel: GICv3: 0 Extended SPIs implemented
Jul 14 21:49:54.972474 kernel: Root IRQ handler: gic_handle_irq
Jul 14 21:49:54.972480 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 14 21:49:54.972487 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 14 21:49:54.972494 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 14 21:49:54.972501 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 14 21:49:54.972508 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 14 21:49:54.972516 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 14 21:49:54.972523 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 14 21:49:54.972530 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 21:49:54.972539 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:49:54.972546 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 14 21:49:54.972553 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 14 21:49:54.972560 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 14 21:49:54.972567 kernel: arm-pv: using stolen time PV
Jul 14 21:49:54.972574 kernel: Console: colour dummy device 80x25
Jul 14 21:49:54.972581 kernel: ACPI: Core revision 20230628
Jul 14 21:49:54.972589 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 14 21:49:54.972596 kernel: pid_max: default: 32768 minimum: 301
Jul 14 21:49:54.972604 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 21:49:54.972612 kernel: landlock: Up and running.
Jul 14 21:49:54.972620 kernel: SELinux: Initializing.
Jul 14 21:49:54.972627 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:49:54.972634 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 21:49:54.972641 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:49:54.972649 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 21:49:54.972656 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 21:49:54.972663 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 21:49:54.972670 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 14 21:49:54.972679 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 14 21:49:54.972686 kernel: Remapping and enabling EFI services.
Jul 14 21:49:54.972693 kernel: smp: Bringing up secondary CPUs ...
Jul 14 21:49:54.972700 kernel: Detected PIPT I-cache on CPU1
Jul 14 21:49:54.972707 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 14 21:49:54.972714 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 14 21:49:54.972722 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:49:54.972729 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 14 21:49:54.972736 kernel: Detected PIPT I-cache on CPU2
Jul 14 21:49:54.972743 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 14 21:49:54.972751 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 14 21:49:54.972759 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:49:54.972771 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 14 21:49:54.972780 kernel: Detected PIPT I-cache on CPU3
Jul 14 21:49:54.972787 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 14 21:49:54.972795 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 14 21:49:54.972802 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 14 21:49:54.972809 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 14 21:49:54.972817 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 21:49:54.972826 kernel: SMP: Total of 4 processors activated.
Jul 14 21:49:54.972834 kernel: CPU features: detected: 32-bit EL0 Support
Jul 14 21:49:54.972841 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 14 21:49:54.972849 kernel: CPU features: detected: Common not Private translations
Jul 14 21:49:54.972856 kernel: CPU features: detected: CRC32 instructions
Jul 14 21:49:54.972864 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 14 21:49:54.972871 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 14 21:49:54.972879 kernel: CPU features: detected: LSE atomic instructions
Jul 14 21:49:54.972888 kernel: CPU features: detected: Privileged Access Never
Jul 14 21:49:54.972895 kernel: CPU features: detected: RAS Extension Support
Jul 14 21:49:54.972903 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 14 21:49:54.972910 kernel: CPU: All CPU(s) started at EL1
Jul 14 21:49:54.972918 kernel: alternatives: applying system-wide alternatives
Jul 14 21:49:54.972925 kernel: devtmpfs: initialized
Jul 14 21:49:54.972933 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 21:49:54.972940 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 21:49:54.972948 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 21:49:54.972956 kernel: SMBIOS 3.0.0 present.
Jul 14 21:49:54.972964 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 14 21:49:54.972972 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 21:49:54.972979 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 14 21:49:54.972987 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 14 21:49:54.972994 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 14 21:49:54.973002 kernel: audit: initializing netlink subsys (disabled)
Jul 14 21:49:54.973009 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 14 21:49:54.973017 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 21:49:54.973025 kernel: cpuidle: using governor menu
Jul 14 21:49:54.973033 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 14 21:49:54.973040 kernel: ASID allocator initialised with 32768 entries
Jul 14 21:49:54.973048 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 21:49:54.973055 kernel: Serial: AMBA PL011 UART driver
Jul 14 21:49:54.973063 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 14 21:49:54.973070 kernel: Modules: 0 pages in range for non-PLT usage
Jul 14 21:49:54.973078 kernel: Modules: 509008 pages in range for PLT usage
Jul 14 21:49:54.973085 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 21:49:54.973094 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 21:49:54.973102 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 14 21:49:54.973109 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 14 21:49:54.973116 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 21:49:54.973124 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 21:49:54.973131 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 14 21:49:54.973143 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 14 21:49:54.973151 kernel: ACPI: Added _OSI(Module Device)
Jul 14 21:49:54.973159 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 21:49:54.973173 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 21:49:54.973182 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 21:49:54.973189 kernel: ACPI: Interpreter enabled
Jul 14 21:49:54.973197 kernel: ACPI: Using GIC for interrupt routing
Jul 14 21:49:54.973204 kernel: ACPI: MCFG table detected, 1 entries
Jul 14 21:49:54.973212 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 14 21:49:54.973219 kernel: printk: console [ttyAMA0] enabled
Jul 14 21:49:54.973227 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 21:49:54.973380 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 21:49:54.973461 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 14 21:49:54.973530 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 14 21:49:54.973598 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 14 21:49:54.973664 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 14 21:49:54.973675 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 14 21:49:54.973682 kernel: PCI host bridge to bus 0000:00
Jul 14 21:49:54.973755 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 14 21:49:54.973821 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 14 21:49:54.973882 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 14 21:49:54.973943 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 21:49:54.974026 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 14 21:49:54.974104 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 21:49:54.974197 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 14 21:49:54.974276 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 14 21:49:54.974346 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:49:54.974416 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 14 21:49:54.974485 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 14 21:49:54.974553 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 14 21:49:54.974618 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 14 21:49:54.974679 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 14 21:49:54.974743 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 14 21:49:54.974753 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 14 21:49:54.974761 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 14 21:49:54.974769 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 14 21:49:54.974776 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 14 21:49:54.974784 kernel: iommu: Default domain type: Translated
Jul 14 21:49:54.974791 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 14 21:49:54.974799 kernel: efivars: Registered efivars operations
Jul 14 21:49:54.974806 kernel: vgaarb: loaded
Jul 14 21:49:54.974816 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 14 21:49:54.974824 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 21:49:54.974831 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 21:49:54.974839 kernel: pnp: PnP ACPI init
Jul 14 21:49:54.974919 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 14 21:49:54.974930 kernel: pnp: PnP ACPI: found 1 devices
Jul 14 21:49:54.974938 kernel: NET: Registered PF_INET protocol family
Jul 14 21:49:54.974946 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 21:49:54.974956 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 21:49:54.974965 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 21:49:54.974973 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 21:49:54.974980 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 21:49:54.974988 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 21:49:54.974995 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:49:54.975003 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 21:49:54.975011 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 21:49:54.975019 kernel: PCI: CLS 0 bytes, default 64
Jul 14 21:49:54.975028 kernel: kvm [1]: HYP mode not available
Jul 14 21:49:54.975036 kernel: Initialise system trusted keyrings
Jul 14 21:49:54.975043 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 21:49:54.975051 kernel: Key type asymmetric registered
Jul 14 21:49:54.975058 kernel: Asymmetric key parser 'x509' registered
Jul 14 21:49:54.975066 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 14 21:49:54.975073 kernel: io scheduler mq-deadline registered
Jul 14 21:49:54.975080 kernel: io scheduler kyber registered
Jul 14 21:49:54.975088 kernel: io scheduler bfq registered
Jul 14 21:49:54.975097 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 14 21:49:54.975105 kernel: ACPI: button: Power Button [PWRB]
Jul 14 21:49:54.975112 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 14 21:49:54.975200 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 14 21:49:54.975211 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 21:49:54.975219 kernel: thunder_xcv, ver 1.0
Jul 14 21:49:54.975226 kernel: thunder_bgx, ver 1.0
Jul 14 21:49:54.975234 kernel: nicpf, ver 1.0
Jul 14 21:49:54.975241 kernel: nicvf, ver 1.0
Jul 14 21:49:54.975322 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 14 21:49:54.975390 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-14T21:49:54 UTC (1752529794)
Jul 14 21:49:54.975401 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 14 21:49:54.975409 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 14 21:49:54.975416 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 14 21:49:54.975424 kernel: watchdog: Hard watchdog permanently disabled
Jul 14 21:49:54.975431 kernel: NET: Registered PF_INET6 protocol family
Jul 14 21:49:54.975439 kernel: Segment Routing with IPv6
Jul 14 21:49:54.975449 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 21:49:54.975456 kernel: NET: Registered PF_PACKET protocol family
Jul 14 21:49:54.975464 kernel: Key type dns_resolver registered
Jul 14 21:49:54.975471 kernel: registered taskstats version 1
Jul 14 21:49:54.975479 kernel: Loading compiled-in X.509 certificates
Jul 14 21:49:54.975487 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: 0878f879bf0f15203fd920e9f7d6346db298c301'
Jul 14 21:49:54.975494 kernel: Key type .fscrypt registered
Jul 14 21:49:54.975501 kernel: Key type fscrypt-provisioning registered
Jul 14 21:49:54.975509 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 21:49:54.975518 kernel: ima: Allocated hash algorithm: sha1
Jul 14 21:49:54.975526 kernel: ima: No architecture policies found
Jul 14 21:49:54.975533 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 14 21:49:54.975541 kernel: clk: Disabling unused clocks
Jul 14 21:49:54.975548 kernel: Freeing unused kernel memory: 39424K
Jul 14 21:49:54.975556 kernel: Run /init as init process
Jul 14 21:49:54.975563 kernel: with arguments:
Jul 14 21:49:54.975571 kernel: /init
Jul 14 21:49:54.975578 kernel: with environment:
Jul 14 21:49:54.975587 kernel: HOME=/
Jul 14 21:49:54.975594 kernel: TERM=linux
Jul 14 21:49:54.975601 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 21:49:54.975610 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 21:49:54.975620 systemd[1]: Detected virtualization kvm.
Jul 14 21:49:54.975628 systemd[1]: Detected architecture arm64.
Jul 14 21:49:54.975636 systemd[1]: Running in initrd.
Jul 14 21:49:54.975645 systemd[1]: No hostname configured, using default hostname.
Jul 14 21:49:54.975653 systemd[1]: Hostname set to .
Jul 14 21:49:54.975662 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 21:49:54.975670 systemd[1]: Queued start job for default target initrd.target.
Jul 14 21:49:54.975678 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 21:49:54.975687 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 21:49:54.975695 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 21:49:54.975703 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 21:49:54.975713 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 21:49:54.975722 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 21:49:54.975731 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 21:49:54.975739 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 21:49:54.975748 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 21:49:54.975756 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:49:54.975764 systemd[1]: Reached target paths.target - Path Units.
Jul 14 21:49:54.975774 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 21:49:54.975782 systemd[1]: Reached target swap.target - Swaps.
Jul 14 21:49:54.975790 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 21:49:54.975798 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 21:49:54.975806 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 21:49:54.975814 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 21:49:54.975822 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 14 21:49:54.975830 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 21:49:54.975838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 21:49:54.975848 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 21:49:54.975857 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 21:49:54.975865 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 14 21:49:54.975873 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 21:49:54.975881 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 14 21:49:54.975888 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 21:49:54.975897 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 21:49:54.975905 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 21:49:54.975915 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:49:54.975923 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 21:49:54.975931 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 14 21:49:54.975939 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 21:49:54.975947 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 21:49:54.975957 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 21:49:54.975965 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 21:49:54.975974 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:49:54.975982 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:49:54.976007 systemd-journald[235]: Collecting audit messages is disabled.
Jul 14 21:49:54.976029 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 21:49:54.976037 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 21:49:54.976046 systemd-journald[235]: Journal started
Jul 14 21:49:54.976066 systemd-journald[235]: Runtime Journal (/run/log/journal/74d4c32995924905851578198b89a5a2) is 5.9M, max 47.3M, 41.4M free.
Jul 14 21:49:54.941498 systemd-modules-load[236]: Inserted module 'overlay'
Jul 14 21:49:54.978810 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 21:49:54.979469 systemd-modules-load[236]: Inserted module 'br_netfilter'
Jul 14 21:49:54.980373 kernel: Bridge firewalling registered
Jul 14 21:49:54.980439 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 21:49:54.981775 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:49:54.997363 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 14 21:49:54.999073 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 21:49:55.001350 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 21:49:55.010213 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 21:49:55.012229 dracut-cmdline[267]: dracut-dracut-053
Jul 14 21:49:55.012564 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 21:49:55.015716 dracut-cmdline[267]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=219fd31147cccfc1f4834c1854a4109714661cabce52e86d5c93000af393c45b
Jul 14 21:49:55.030405 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 21:49:55.062016 systemd-resolved[293]: Positive Trust Anchors:
Jul 14 21:49:55.062031 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 21:49:55.062063 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 21:49:55.069829 systemd-resolved[293]: Defaulting to hostname 'linux'.
Jul 14 21:49:55.072881 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 21:49:55.075371 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:49:55.100204 kernel: SCSI subsystem initialized
Jul 14 21:49:55.105189 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 21:49:55.112204 kernel: iscsi: registered transport (tcp)
Jul 14 21:49:55.127188 kernel: iscsi: registered transport (qla4xxx)
Jul 14 21:49:55.127212 kernel: QLogic iSCSI HBA Driver
Jul 14 21:49:55.182266 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 14 21:49:55.192336 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 21:49:55.209324 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 21:49:55.210234 kernel: device-mapper: uevent: version 1.0.3
Jul 14 21:49:55.210247 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 14 21:49:55.262300 kernel: raid6: neonx8 gen() 15448 MB/s
Jul 14 21:49:55.279225 kernel: raid6: neonx4 gen() 15240 MB/s
Jul 14 21:49:55.296205 kernel: raid6: neonx2 gen() 13278 MB/s
Jul 14 21:49:55.313193 kernel: raid6: neonx1 gen() 10478 MB/s
Jul 14 21:49:55.330212 kernel: raid6: int64x8 gen() 6965 MB/s
Jul 14 21:49:55.347192 kernel: raid6: int64x4 gen() 7341 MB/s
Jul 14 21:49:55.364190 kernel: raid6: int64x2 gen() 6115 MB/s
Jul 14 21:49:55.381339 kernel: raid6: int64x1 gen() 5052 MB/s
Jul 14 21:49:55.381356 kernel: raid6: using algorithm neonx8 gen() 15448 MB/s
Jul 14 21:49:55.399312 kernel: raid6: .... xor() 11886 MB/s, rmw enabled
Jul 14 21:49:55.399346 kernel: raid6: using neon recovery algorithm
Jul 14 21:49:55.404194 kernel: xor: measuring software checksum speed
Jul 14 21:49:55.405500 kernel: 8regs : 17359 MB/sec
Jul 14 21:49:55.405513 kernel: 32regs : 19669 MB/sec
Jul 14 21:49:55.406749 kernel: arm64_neon : 26989 MB/sec
Jul 14 21:49:55.406760 kernel: xor: using function: arm64_neon (26989 MB/sec)
Jul 14 21:49:55.464209 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 14 21:49:55.479095 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 21:49:55.492340 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 21:49:55.505939 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jul 14 21:49:55.509115 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 21:49:55.526358 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 21:49:55.538977 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jul 14 21:49:55.566249 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 21:49:55.580362 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 21:49:55.619334 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 21:49:55.629515 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 21:49:55.641255 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 21:49:55.642871 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 21:49:55.645027 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:49:55.646122 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 21:49:55.654358 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 21:49:55.663860 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 14 21:49:55.664035 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 21:49:55.669224 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 21:49:55.673901 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 21:49:55.673931 kernel: GPT:9289727 != 19775487
Jul 14 21:49:55.673941 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 21:49:55.673950 kernel: GPT:9289727 != 19775487
Jul 14 21:49:55.673960 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 21:49:55.673970 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:49:55.677730 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 21:49:55.677834 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:49:55.679284 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:49:55.680348 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 21:49:55.680484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:49:55.683794 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:49:55.694491 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (506)
Jul 14 21:49:55.694365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 21:49:55.705451 kernel: BTRFS: device fsid a239cc51-2249-4f1a-8861-421a0d84a369 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (522)
Jul 14 21:49:55.711282 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 21:49:55.712729 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 21:49:55.718288 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 21:49:55.722831 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 21:49:55.726692 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 21:49:55.727867 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 21:49:55.742380 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 21:49:55.744164 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 21:49:55.749907 disk-uuid[551]: Primary Header is updated.
Jul 14 21:49:55.749907 disk-uuid[551]: Secondary Entries is updated.
Jul 14 21:49:55.749907 disk-uuid[551]: Secondary Header is updated.
Jul 14 21:49:55.757384 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:49:55.762222 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:49:55.773033 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 21:49:56.762212 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 21:49:56.762361 disk-uuid[552]: The operation has completed successfully.
Jul 14 21:49:56.782880 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 21:49:56.782987 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 14 21:49:56.805378 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 14 21:49:56.808435 sh[573]: Success
Jul 14 21:49:56.822213 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 14 21:49:56.864779 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 14 21:49:56.878594 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 14 21:49:56.880027 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 14 21:49:56.895957 kernel: BTRFS info (device dm-0): first mount of filesystem a239cc51-2249-4f1a-8861-421a0d84a369
Jul 14 21:49:56.896003 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:49:56.896015 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 14 21:49:56.898185 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 14 21:49:56.898207 kernel: BTRFS info (device dm-0): using free space tree
Jul 14 21:49:56.902225 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 14 21:49:56.903585 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 14 21:49:56.916504 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 14 21:49:56.919247 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 14 21:49:56.926559 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:49:56.926599 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:49:56.926611 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:49:56.930207 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:49:56.938807 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 14 21:49:56.941203 kernel: BTRFS info (device vda6): last unmount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:49:56.946892 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 14 21:49:56.958325 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 14 21:49:57.053470 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 21:49:57.063436 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 21:49:57.070590 ignition[665]: Ignition 2.19.0
Jul 14 21:49:57.070599 ignition[665]: Stage: fetch-offline
Jul 14 21:49:57.070631 ignition[665]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:49:57.070644 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:49:57.070794 ignition[665]: parsed url from cmdline: ""
Jul 14 21:49:57.070797 ignition[665]: no config URL provided
Jul 14 21:49:57.070802 ignition[665]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 21:49:57.070808 ignition[665]: no config at "/usr/lib/ignition/user.ign"
Jul 14 21:49:57.070833 ignition[665]: op(1): [started] loading QEMU firmware config module
Jul 14 21:49:57.070838 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 21:49:57.085557 ignition[665]: op(1): [finished] loading QEMU firmware config module
Jul 14 21:49:57.096671 systemd-networkd[763]: lo: Link UP
Jul 14 21:49:57.096682 systemd-networkd[763]: lo: Gained carrier
Jul 14 21:49:57.097818 systemd-networkd[763]: Enumeration completed
Jul 14 21:49:57.098117 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 21:49:57.098262 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:49:57.098265 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 21:49:57.099663 systemd-networkd[763]: eth0: Link UP
Jul 14 21:49:57.099666 systemd-networkd[763]: eth0: Gained carrier
Jul 14 21:49:57.099676 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 21:49:57.099994 systemd[1]: Reached target network.target - Network.
Jul 14 21:49:57.117241 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 21:49:57.136618 ignition[665]: parsing config with SHA512: bb7b328e056ad2d060223658046a895116e4799283b28ced874568d494ad3a097194ff78505db87393899973dfad0c4e2f7e33304c234acb4e2992dc4ba1260f
Jul 14 21:49:57.140906 unknown[665]: fetched base config from "system"
Jul 14 21:49:57.140916 unknown[665]: fetched user config from "qemu"
Jul 14 21:49:57.141342 ignition[665]: fetch-offline: fetch-offline passed
Jul 14 21:49:57.143324 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 21:49:57.141401 ignition[665]: Ignition finished successfully
Jul 14 21:49:57.147155 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 21:49:57.152407 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 14 21:49:57.165699 ignition[769]: Ignition 2.19.0
Jul 14 21:49:57.165708 ignition[769]: Stage: kargs
Jul 14 21:49:57.165879 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:49:57.165888 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:49:57.169892 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 14 21:49:57.166811 ignition[769]: kargs: kargs passed
Jul 14 21:49:57.166853 ignition[769]: Ignition finished successfully
Jul 14 21:49:57.177342 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 14 21:49:57.190363 ignition[776]: Ignition 2.19.0
Jul 14 21:49:57.190373 ignition[776]: Stage: disks
Jul 14 21:49:57.190560 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Jul 14 21:49:57.193350 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 14 21:49:57.190570 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:49:57.194415 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 21:49:57.191427 ignition[776]: disks: disks passed
Jul 14 21:49:57.196129 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 21:49:57.191473 ignition[776]: Ignition finished successfully
Jul 14 21:49:57.198152 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 21:49:57.200047 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 21:49:57.201686 systemd[1]: Reached target basic.target - Basic System.
Jul 14 21:49:57.210340 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 14 21:49:57.220574 systemd-resolved[293]: Detected conflict on linux IN A 10.0.0.52
Jul 14 21:49:57.220585 systemd-resolved[293]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Jul 14 21:49:57.223664 systemd-fsck[786]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 14 21:49:57.230251 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 14 21:49:57.234104 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 21:49:57.280188 kernel: EXT4-fs (vda9): mounted filesystem a9f35e2f-e295-4589-8fb4-4b611a8bb71c r/w with ordered data mode. Quota mode: none.
Jul 14 21:49:57.280953 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 21:49:57.282345 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 21:49:57.293278 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:49:57.295633 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 14 21:49:57.296717 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 14 21:49:57.296773 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 21:49:57.296794 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 21:49:57.302894 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 14 21:49:57.306474 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 14 21:49:57.309801 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (794)
Jul 14 21:49:57.311220 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:49:57.311240 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:49:57.311966 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:49:57.318721 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:49:57.317705 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:49:57.357486 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 21:49:57.362179 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory
Jul 14 21:49:57.366325 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 21:49:57.370233 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 21:49:57.442712 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 14 21:49:57.451325 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 14 21:49:57.453607 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 14 21:49:57.459186 kernel: BTRFS info (device vda6): last unmount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:49:57.474860 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 14 21:49:57.477504 ignition[906]: INFO : Ignition 2.19.0
Jul 14 21:49:57.478410 ignition[906]: INFO : Stage: mount
Jul 14 21:49:57.478410 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:49:57.478410 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:49:57.481217 ignition[906]: INFO : mount: mount passed
Jul 14 21:49:57.481217 ignition[906]: INFO : Ignition finished successfully
Jul 14 21:49:57.482204 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 14 21:49:57.493257 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 14 21:49:57.894860 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 14 21:49:57.909385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 21:49:57.915190 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (920)
Jul 14 21:49:57.917418 kernel: BTRFS info (device vda6): first mount of filesystem a813e27e-7b70-4c75-b1e9-ccef805dad93
Jul 14 21:49:57.917448 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 14 21:49:57.917459 kernel: BTRFS info (device vda6): using free space tree
Jul 14 21:49:57.920193 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 21:49:57.921470 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 21:49:57.948840 ignition[937]: INFO : Ignition 2.19.0
Jul 14 21:49:57.948840 ignition[937]: INFO : Stage: files
Jul 14 21:49:57.950755 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 21:49:57.950755 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 21:49:57.950755 ignition[937]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 21:49:57.954353 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 21:49:57.954353 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 21:49:57.957675 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 21:49:57.959111 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 21:49:57.959111 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 21:49:57.958305 unknown[937]: wrote ssh authorized keys file for user: core
Jul 14 21:49:57.962990 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 14 21:49:57.962990 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 14 21:49:58.015086 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 14 21:49:58.164579 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 14 21:49:58.164579 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 21:49:58.168157 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 21:49:58.168157 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:49:58.168157 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 21:49:58.168157 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:49:58.168157 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 21:49:58.168157 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:49:58.168157 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 21:49:58.168157 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:49:58.168157 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 21:49:58.168157 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 14 21:49:58.168157 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 21:49:58.168157 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 21:49:58.168157 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jul 14 21:49:58.484352 systemd-networkd[763]: eth0: Gained IPv6LL Jul 14 21:49:58.566892 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 14 21:49:58.941946 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jul 14 21:49:58.941946 ignition[937]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 14 21:49:58.945611 ignition[937]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 21:49:58.945611 ignition[937]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 21:49:58.945611 ignition[937]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 14 21:49:58.945611 ignition[937]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 14 21:49:58.945611 ignition[937]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 21:49:58.945611 ignition[937]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 21:49:58.945611 ignition[937]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 14 21:49:58.945611 ignition[937]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 21:49:58.971727 ignition[937]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 21:49:58.976096 ignition[937]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 21:49:58.978785 ignition[937]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 21:49:58.978785 ignition[937]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 14 21:49:58.978785 ignition[937]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 21:49:58.978785 ignition[937]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 21:49:58.978785 ignition[937]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 21:49:58.978785 ignition[937]: INFO : files: files passed Jul 14 21:49:58.978785 ignition[937]: INFO : Ignition finished successfully Jul 14 21:49:58.980541 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 14 21:49:58.995346 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 14 21:49:58.997735 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jul 14 21:49:58.999633 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 21:49:58.999723 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 14 21:49:59.005630 initrd-setup-root-after-ignition[965]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 14 21:49:59.009028 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:49:59.009028 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:49:59.012650 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 21:49:59.013705 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 21:49:59.016531 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 14 21:49:59.024352 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 14 21:49:59.046524 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 21:49:59.046644 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 14 21:49:59.048933 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 14 21:49:59.050787 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 14 21:49:59.052603 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 14 21:49:59.060339 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 14 21:49:59.071767 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 21:49:59.074245 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 14 21:49:59.086306 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 14 21:49:59.087559 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 21:49:59.089605 systemd[1]: Stopped target timers.target - Timer Units.
Jul 14 21:49:59.091348 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 21:49:59.091475 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 21:49:59.094019 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 14 21:49:59.096082 systemd[1]: Stopped target basic.target - Basic System.
Jul 14 21:49:59.097750 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 14 21:49:59.099424 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 21:49:59.101346 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 14 21:49:59.103319 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 14 21:49:59.105184 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 21:49:59.107198 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 14 21:49:59.109185 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 14 21:49:59.110909 systemd[1]: Stopped target swap.target - Swaps.
Jul 14 21:49:59.112481 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 14 21:49:59.112604 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 21:49:59.114907 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 14 21:49:59.116829 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 21:49:59.118734 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 14 21:49:59.122229 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 21:49:59.123472 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 21:49:59.123588 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 21:49:59.126536 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 21:49:59.126655 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 21:49:59.128656 systemd[1]: Stopped target paths.target - Path Units. Jul 14 21:49:59.130245 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 21:49:59.134226 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 21:49:59.135472 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 21:49:59.137521 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 21:49:59.139049 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 21:49:59.139146 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 21:49:59.140670 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 21:49:59.140753 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 21:49:59.142299 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 21:49:59.142414 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 21:49:59.144239 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 21:49:59.144344 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 21:49:59.157358 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 21:49:59.158295 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 21:49:59.158439 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 21:49:59.163395 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 14 21:49:59.164281 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 21:49:59.164428 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 21:49:59.171475 ignition[992]: INFO : Ignition 2.19.0 Jul 14 21:49:59.171475 ignition[992]: INFO : Stage: umount Jul 14 21:49:59.171475 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 21:49:59.171475 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 21:49:59.171475 ignition[992]: INFO : umount: umount passed Jul 14 21:49:59.171475 ignition[992]: INFO : Ignition finished successfully Jul 14 21:49:59.168507 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 21:49:59.168615 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 21:49:59.173001 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 21:49:59.174199 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 14 21:49:59.178862 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 21:49:59.179430 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 21:49:59.179533 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
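The ignition[992] banner marks Ignition's final stage, umount, which detaches the /sysroot mounts the earlier stages created; the "no configs"/"no config dir" lines name the directories where base configs would be layered in for this platform (/usr/lib/ignition/base.d and base.platform.d/qemu). Assuming nothing beyond the journal itself, the full stage progression can be replayed from a booted system:

  $ journalctl -b -t ignition | grep -i 'stage:'   # fetch, disks, mount, files, umount, in order
  $ ls /usr/lib/ignition/base.d 2>/dev/null        # empty here, matching the "no configs" line above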
Jul 14 21:49:59.181952 systemd[1]: Stopped target network.target - Network. Jul 14 21:49:59.183456 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 21:49:59.183529 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 14 21:49:59.185349 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 21:49:59.185394 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 14 21:49:59.187070 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 21:49:59.187116 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 14 21:49:59.188844 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 14 21:49:59.188895 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 14 21:49:59.190809 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 14 21:49:59.195145 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 14 21:49:59.207272 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 21:49:59.208500 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 14 21:49:59.211222 systemd-networkd[763]: eth0: DHCPv6 lease lost Jul 14 21:49:59.211266 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 14 21:49:59.211343 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 21:49:59.213626 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 21:49:59.213729 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 14 21:49:59.215923 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 21:49:59.215984 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 14 21:49:59.225317 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 14 21:49:59.226347 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 21:49:59.226423 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 21:49:59.228472 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 21:49:59.228525 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:49:59.230533 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 21:49:59.230588 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 14 21:49:59.234859 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 21:49:59.238574 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 21:49:59.238670 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 14 21:49:59.243509 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 21:49:59.243563 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 14 21:49:59.250306 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 21:49:59.250437 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 14 21:49:59.256030 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 21:49:59.256217 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 21:49:59.258639 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 21:49:59.258683 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jul 14 21:49:59.260615 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 21:49:59.260655 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 21:49:59.262475 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 21:49:59.262531 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 14 21:49:59.265434 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 21:49:59.265484 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 14 21:49:59.268284 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 21:49:59.268334 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 21:49:59.284348 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 14 21:49:59.285454 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 21:49:59.285523 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 21:49:59.287640 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 21:49:59.287695 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 21:49:59.291899 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 21:49:59.292015 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 14 21:49:59.295519 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 14 21:49:59.298019 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 14 21:49:59.308597 systemd[1]: Switching root. Jul 14 21:49:59.333592 systemd-journald[235]: Journal stopped Jul 14 21:50:00.098924 systemd-journald[235]: Received SIGTERM from PID 1 (systemd). Jul 14 21:50:00.098978 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 21:50:00.098992 kernel: SELinux: policy capability open_perms=1 Jul 14 21:50:00.099003 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 21:50:00.099013 kernel: SELinux: policy capability always_check_network=0 Jul 14 21:50:00.099024 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 21:50:00.099037 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 21:50:00.099050 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 21:50:00.099071 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 21:50:00.099085 kernel: audit: type=1403 audit(1752529799.494:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 21:50:00.099100 systemd[1]: Successfully loaded SELinux policy in 32.533ms. Jul 14 21:50:00.099117 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.827ms. Jul 14 21:50:00.099142 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 21:50:00.099155 systemd[1]: Detected virtualization kvm. Jul 14 21:50:00.099180 systemd[1]: Detected architecture arm64. Jul 14 21:50:00.099192 systemd[1]: Detected first boot. Jul 14 21:50:00.099204 systemd[1]: Initializing machine ID from VM UUID. Jul 14 21:50:00.099214 zram_generator::config[1038]: No configuration found. Jul 14 21:50:00.099226 systemd[1]: Populated /etc with preset unit settings. 
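This is the switch-root boundary: the initramfs journald (PID 235) is terminated, PID 1 pivots into the real root, loads the SELinux policy (32.5ms) and relabels the API filesystems, and detects a first boot, so /etc is populated from preset unit settings. Because the initramfs journal is handed over, both halves of the boot live under one boot ID; a few standard queries confirm that (commands real, nothing assumed):

  $ journalctl --list-boots                 # a single boot ID spans the pre- and post-pivot entries
  $ journalctl -b -u systemd-journald       # shows the PID 235 stop and PID 1105 start seen here
  $ journalctl -b -k | grep -i selinux      # the kernel-logged policy capability lines above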
Jul 14 21:50:00.099240 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 21:50:00.099250 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 14 21:50:00.099261 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 21:50:00.099272 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 14 21:50:00.099287 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 14 21:50:00.099298 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 14 21:50:00.099308 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 14 21:50:00.099320 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 14 21:50:00.099331 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 14 21:50:00.099341 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 14 21:50:00.099351 systemd[1]: Created slice user.slice - User and Session Slice. Jul 14 21:50:00.099362 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 21:50:00.099375 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 21:50:00.099386 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 14 21:50:00.099396 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 14 21:50:00.099407 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 14 21:50:00.099418 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 21:50:00.099429 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 14 21:50:00.099439 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 21:50:00.099450 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 14 21:50:00.099460 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 14 21:50:00.099473 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 14 21:50:00.099483 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 14 21:50:00.099494 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 21:50:00.099505 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 21:50:00.099516 systemd[1]: Reached target slices.target - Slice Units. Jul 14 21:50:00.099527 systemd[1]: Reached target swap.target - Swaps. Jul 14 21:50:00.099537 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 14 21:50:00.099548 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 14 21:50:00.099560 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 21:50:00.099571 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 14 21:50:00.099581 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 21:50:00.099592 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 14 21:50:00.099603 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jul 14 21:50:00.099613 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 14 21:50:00.099624 systemd[1]: Mounting media.mount - External Media Directory... Jul 14 21:50:00.099634 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 14 21:50:00.099645 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 14 21:50:00.099657 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 14 21:50:00.099668 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 21:50:00.099679 systemd[1]: Reached target machines.target - Containers. Jul 14 21:50:00.099689 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 14 21:50:00.099700 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:50:00.099711 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 21:50:00.099721 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 21:50:00.099732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:50:00.099745 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 21:50:00.099755 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:50:00.099766 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 14 21:50:00.099776 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:50:00.099787 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 21:50:00.099797 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 21:50:00.099808 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 14 21:50:00.099819 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 21:50:00.099829 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 21:50:00.099841 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 21:50:00.099852 kernel: loop: module loaded Jul 14 21:50:00.099862 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 21:50:00.099872 kernel: fuse: init (API version 7.39) Jul 14 21:50:00.099882 kernel: ACPI: bus type drm_connector registered Jul 14 21:50:00.099892 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 14 21:50:00.099902 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 14 21:50:00.099913 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 21:50:00.099923 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 21:50:00.099936 systemd[1]: Stopped verity-setup.service. Jul 14 21:50:00.099946 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 14 21:50:00.099957 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 14 21:50:00.099984 systemd-journald[1105]: Collecting audit messages is disabled. Jul 14 21:50:00.100008 systemd[1]: Mounted media.mount - External Media Directory. 
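The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs are instances of systemd's modprobe@.service template, which runs modprobe on the instance name so module loading is ordered and tracked like any other unit; the interleaved kernel lines ("loop: module loaded", "fuse: init", the drm_connector bus registration) are the driver-side confirmations. The template can be inspected and exercised directly (real units; module names match this boot):

  $ systemctl cat modprobe@loop.service     # the template's ExecStart runs modprobe on the instance name %i
  $ systemctl start modprobe@fuse.service   # loads fuse exactly as the boot transaction did
  $ lsmod | grep -E 'loop|fuse'             # confirm the modules are resident (if built as modules)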
Jul 14 21:50:00.100019 systemd-journald[1105]: Journal started Jul 14 21:50:00.100040 systemd-journald[1105]: Runtime Journal (/run/log/journal/74d4c32995924905851578198b89a5a2) is 5.9M, max 47.3M, 41.4M free. Jul 14 21:49:59.896562 systemd[1]: Queued start job for default target multi-user.target. Jul 14 21:49:59.910201 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 14 21:49:59.910552 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 14 21:50:00.103639 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 21:50:00.104279 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 14 21:50:00.105437 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 14 21:50:00.106668 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 14 21:50:00.109206 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 14 21:50:00.110626 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 21:50:00.112187 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 21:50:00.112340 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 14 21:50:00.113737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:50:00.113882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:50:00.115340 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:50:00.115516 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 21:50:00.116793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:50:00.116944 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:50:00.118422 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 21:50:00.118557 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 14 21:50:00.120054 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:50:00.120241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 21:50:00.121598 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 21:50:00.122972 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 14 21:50:00.124659 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 14 21:50:00.137332 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 14 21:50:00.149316 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 14 21:50:00.151346 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 14 21:50:00.152435 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 21:50:00.152473 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 21:50:00.154375 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 14 21:50:00.156508 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 14 21:50:00.158588 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 14 21:50:00.159703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 14 21:50:00.161014 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 14 21:50:00.164355 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 14 21:50:00.165559 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:50:00.167353 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 14 21:50:00.168437 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 21:50:00.169775 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 21:50:00.174362 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 14 21:50:00.181027 systemd-journald[1105]: Time spent on flushing to /var/log/journal/74d4c32995924905851578198b89a5a2 is 29.371ms for 855 entries. Jul 14 21:50:00.181027 systemd-journald[1105]: System Journal (/var/log/journal/74d4c32995924905851578198b89a5a2) is 8.0M, max 195.6M, 187.6M free. Jul 14 21:50:00.225404 systemd-journald[1105]: Received client request to flush runtime journal. Jul 14 21:50:00.225688 kernel: loop0: detected capacity change from 0 to 114328 Jul 14 21:50:00.225743 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 21:50:00.177518 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 14 21:50:00.179868 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 21:50:00.182847 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 14 21:50:00.184498 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 14 21:50:00.186229 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 14 21:50:00.189370 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 14 21:50:00.193337 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 14 21:50:00.206321 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 14 21:50:00.210239 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 14 21:50:00.213225 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 21:50:00.222535 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 14 21:50:00.233457 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 21:50:00.235414 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 21:50:00.236925 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 14 21:50:00.238728 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 14 21:50:00.245281 kernel: loop1: detected capacity change from 0 to 207008 Jul 14 21:50:00.246594 udevadm[1162]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 14 21:50:00.258472 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Jul 14 21:50:00.258492 systemd-tmpfiles[1165]: ACLs are not supported, ignoring. Jul 14 21:50:00.261904 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
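systemd-journal-flush.service is where the volatile runtime journal under /run/log/journal migrates into the persistent one under /var/log/journal; here 855 entries moved in about 29ms, after which the 195.6M system-journal cap shown above applies. Post-boot, the same state is visible with stock journalctl verbs (all real; --flush needs root):

  $ journalctl --disk-usage     # persistent usage against the 195.6M cap
  $ ls /var/log/journal/        # one directory per machine ID (74d4c329... here)
  $ journalctl --flush          # requests the same runtime-to-persistent handover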
Jul 14 21:50:00.279197 kernel: loop2: detected capacity change from 0 to 114432 Jul 14 21:50:00.305193 kernel: loop3: detected capacity change from 0 to 114328 Jul 14 21:50:00.309195 kernel: loop4: detected capacity change from 0 to 207008 Jul 14 21:50:00.315200 kernel: loop5: detected capacity change from 0 to 114432 Jul 14 21:50:00.317896 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 14 21:50:00.318296 (sd-merge)[1174]: Merged extensions into '/usr'. Jul 14 21:50:00.321844 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)... Jul 14 21:50:00.321949 systemd[1]: Reloading... Jul 14 21:50:00.368704 zram_generator::config[1196]: No configuration found. Jul 14 21:50:00.453266 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 21:50:00.473853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:50:00.510157 systemd[1]: Reloading finished in 187 ms. Jul 14 21:50:00.546704 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 14 21:50:00.550195 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 14 21:50:00.562334 systemd[1]: Starting ensure-sysext.service... Jul 14 21:50:00.564254 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 21:50:00.576006 systemd[1]: Reloading requested from client PID 1234 ('systemctl') (unit ensure-sysext.service)... Jul 14 21:50:00.576022 systemd[1]: Reloading... Jul 14 21:50:00.584622 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 21:50:00.585304 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 14 21:50:00.586031 systemd-tmpfiles[1235]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 21:50:00.586779 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jul 14 21:50:00.586846 systemd-tmpfiles[1235]: ACLs are not supported, ignoring. Jul 14 21:50:00.589282 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 21:50:00.589297 systemd-tmpfiles[1235]: Skipping /boot Jul 14 21:50:00.596078 systemd-tmpfiles[1235]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 21:50:00.596098 systemd-tmpfiles[1235]: Skipping /boot Jul 14 21:50:00.626193 zram_generator::config[1260]: No configuration found. Jul 14 21:50:00.716552 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:50:00.753502 systemd[1]: Reloading finished in 177 ms. Jul 14 21:50:00.765316 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 14 21:50:00.780254 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 21:50:00.790118 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 21:50:00.792986 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
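The (sd-merge) lines are systemd-sysext overlaying the three extension images, 'containerd-flatcar', 'docker-flatcar' and 'kubernetes', onto /usr, which is why a manager reload immediately follows: the merged tree contributes new unit files (containerd, docker, kubelet) that PID 1 must rescan, and ldconfig then rebuilds the linker cache over the merged /usr. The merge state is queryable with the stock verbs (all real; refresh needs root):

  $ systemd-sysext status    # which hierarchies carry extensions, and since when
  $ systemd-sysext list      # the discovered images, e.g. /etc/extensions/kubernetes.raw
  $ systemd-sysext refresh   # re-merge after adding or removing an image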
Jul 14 21:50:00.795645 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 14 21:50:00.802283 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 21:50:00.805948 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 21:50:00.812572 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 14 21:50:00.819690 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:50:00.831466 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:50:00.836389 systemd-udevd[1309]: Using default interface naming scheme 'v255'. Jul 14 21:50:00.836466 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:50:00.840726 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:50:00.842901 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 21:50:00.846504 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 14 21:50:00.850227 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 14 21:50:00.852023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:50:00.852238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:50:00.866439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:50:00.866591 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:50:00.870388 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 21:50:00.872365 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 14 21:50:00.874050 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:50:00.874344 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 21:50:00.887505 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 14 21:50:00.887645 augenrules[1334]: No rules Jul 14 21:50:00.889791 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 21:50:00.899006 systemd[1]: Finished ensure-sysext.service. Jul 14 21:50:00.901830 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 21:50:00.910440 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 21:50:00.913629 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 21:50:00.918027 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 21:50:00.921381 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 21:50:00.922844 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 21:50:00.926345 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 21:50:00.933527 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 14 21:50:00.945366 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jul 14 21:50:00.946495 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 21:50:00.946775 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 14 21:50:00.952322 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1355) Jul 14 21:50:00.956623 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 21:50:00.959971 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 21:50:00.963835 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 21:50:00.965021 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 21:50:00.968707 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 21:50:00.969679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 21:50:00.973613 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 21:50:00.975247 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 21:50:00.984966 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 14 21:50:00.996703 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 14 21:50:01.007431 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 14 21:50:01.019606 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 14 21:50:01.021156 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 21:50:01.021238 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 21:50:01.039580 systemd-resolved[1303]: Positive Trust Anchors: Jul 14 21:50:01.039600 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 21:50:01.039634 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 21:50:01.052584 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 14 21:50:01.054032 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 14 21:50:01.059109 systemd-resolved[1303]: Defaulting to hostname 'linux'. Jul 14 21:50:01.059369 systemd[1]: Reached target time-set.target - System Time Set. Jul 14 21:50:01.068890 systemd-networkd[1364]: lo: Link UP Jul 14 21:50:01.068908 systemd-networkd[1364]: lo: Gained carrier Jul 14 21:50:01.069680 systemd-networkd[1364]: Enumeration completed Jul 14 21:50:01.075471 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 21:50:01.076704 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jul 14 21:50:01.078156 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 21:50:01.079669 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 21:50:01.079676 systemd-networkd[1364]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 21:50:01.079681 systemd[1]: Reached target network.target - Network. Jul 14 21:50:01.080521 systemd-networkd[1364]: eth0: Link UP Jul 14 21:50:01.080529 systemd-networkd[1364]: eth0: Gained carrier Jul 14 21:50:01.080542 systemd-networkd[1364]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 21:50:01.080791 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 21:50:01.083325 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 14 21:50:01.091485 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 14 21:50:01.096370 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 14 21:50:01.104236 systemd-networkd[1364]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 21:50:01.104863 systemd-timesyncd[1365]: Network configuration changed, trying to establish connection. Jul 14 21:50:01.576245 systemd-resolved[1303]: Clock change detected. Flushing caches. Jul 14 21:50:01.576363 systemd-timesyncd[1365]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 21:50:01.576420 systemd-timesyncd[1365]: Initial clock synchronization to Mon 2025-07-14 21:50:01.575732 UTC. Jul 14 21:50:01.594629 lvm[1390]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 21:50:01.615646 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 21:50:01.632550 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 14 21:50:01.636230 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 21:50:01.637390 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 21:50:01.638550 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 14 21:50:01.639752 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 14 21:50:01.641162 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 21:50:01.642327 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 21:50:01.643572 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 21:50:01.644960 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 21:50:01.644999 systemd[1]: Reached target paths.target - Path Units. Jul 14 21:50:01.645885 systemd[1]: Reached target timers.target - Timer Units. Jul 14 21:50:01.647709 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 21:50:01.650082 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 21:50:01.669685 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 21:50:01.672011 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
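Note the apparent jump from 21:50:01.104 to 21:50:01.576: once eth0 acquires 10.0.0.52/16 over DHCPv4, systemd-timesyncd reaches the NTP server at 10.0.0.1:123 and steps the clock, so systemd-resolved flushes its caches and every later timestamp sits on the corrected timeline. The same network, time and DNS state is queryable live (commands real, interface name from this boot):

  $ networkctl status eth0          # address, gateway and DHCP lease shown above
  $ timedatectl timesync-status     # the 10.0.0.1 server, stratum and poll interval
  $ resolvectl status               # per-link DNS plus the trust anchors listed earlier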
Jul 14 21:50:01.673675 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 21:50:01.674803 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 21:50:01.675716 systemd[1]: Reached target basic.target - Basic System. Jul 14 21:50:01.676642 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 14 21:50:01.676676 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 21:50:01.677584 systemd[1]: Starting containerd.service - containerd container runtime... Jul 14 21:50:01.679473 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 14 21:50:01.681680 lvm[1397]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 21:50:01.682594 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 14 21:50:01.688538 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 21:50:01.689586 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 21:50:01.690618 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 14 21:50:01.695179 jq[1400]: false Jul 14 21:50:01.694725 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 14 21:50:01.699881 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 14 21:50:01.702909 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 14 21:50:01.710850 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 14 21:50:01.716711 extend-filesystems[1401]: Found loop3 Jul 14 21:50:01.716711 extend-filesystems[1401]: Found loop4 Jul 14 21:50:01.716711 extend-filesystems[1401]: Found loop5 Jul 14 21:50:01.716711 extend-filesystems[1401]: Found vda Jul 14 21:50:01.716711 extend-filesystems[1401]: Found vda1 Jul 14 21:50:01.716711 extend-filesystems[1401]: Found vda2 Jul 14 21:50:01.716711 extend-filesystems[1401]: Found vda3 Jul 14 21:50:01.716711 extend-filesystems[1401]: Found usr Jul 14 21:50:01.716711 extend-filesystems[1401]: Found vda4 Jul 14 21:50:01.716711 extend-filesystems[1401]: Found vda6 Jul 14 21:50:01.716711 extend-filesystems[1401]: Found vda7 Jul 14 21:50:01.716711 extend-filesystems[1401]: Found vda9 Jul 14 21:50:01.716711 extend-filesystems[1401]: Checking size of /dev/vda9 Jul 14 21:50:01.732462 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 21:50:01.733084 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 21:50:01.735997 systemd[1]: Starting update-engine.service - Update Engine... Jul 14 21:50:01.740655 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 21:50:01.743015 dbus-daemon[1399]: [system] SELinux support is enabled Jul 14 21:50:01.743947 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 14 21:50:01.747642 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 14 21:50:01.751646 extend-filesystems[1401]: Resized partition /dev/vda9 Jul 14 21:50:01.750385 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jul 14 21:50:01.750538 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 14 21:50:01.750816 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 21:50:01.750949 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 21:50:01.754266 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 21:50:01.754486 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 14 21:50:01.764275 jq[1420]: true Jul 14 21:50:01.776089 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 21:50:01.776134 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 14 21:50:01.780702 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1341) Jul 14 21:50:01.777611 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 21:50:01.777666 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 21:50:01.778832 (ntainerd)[1433]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 21:50:01.782345 jq[1431]: true Jul 14 21:50:01.794591 extend-filesystems[1423]: resize2fs 1.47.1 (20-May-2024) Jul 14 21:50:01.795815 tar[1424]: linux-arm64/LICENSE Jul 14 21:50:01.796075 tar[1424]: linux-arm64/helm Jul 14 21:50:01.798599 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 21:50:01.810866 systemd-logind[1412]: Watching system buttons on /dev/input/event0 (Power Button) Jul 14 21:50:01.811500 systemd-logind[1412]: New seat seat0. Jul 14 21:50:01.818284 systemd[1]: Started systemd-logind.service - User Login Management. Jul 14 21:50:01.828748 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 21:50:01.837853 update_engine[1418]: I20250714 21:50:01.837260 1418 main.cc:92] Flatcar Update Engine starting Jul 14 21:50:01.849760 extend-filesystems[1423]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 21:50:01.849760 extend-filesystems[1423]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 21:50:01.849760 extend-filesystems[1423]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 21:50:01.855880 extend-filesystems[1401]: Resized filesystem in /dev/vda9 Jul 14 21:50:01.866746 update_engine[1418]: I20250714 21:50:01.854363 1418 update_check_scheduler.cc:74] Next update check in 6m16s Jul 14 21:50:01.851486 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 21:50:01.852611 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 21:50:01.856002 systemd[1]: Started update-engine.service - Update Engine. Jul 14 21:50:01.874809 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 14 21:50:01.884262 bash[1452]: Updated "/home/core/.ssh/authorized_keys" Jul 14 21:50:01.909049 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 21:50:01.911430 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
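extend-filesystems grows the root filesystem online to fill its partition: resize2fs takes /dev/vda9 from 553472 to 1864699 4k blocks (roughly 2.1G to 7.1G) while it is mounted on /, so no reboot or fsck is needed. The equivalent manual sequence, safe for ext4 only when growing, is:

  $ findmnt -no SOURCE /    # confirm the root device, /dev/vda9 on this VM
  $ resize2fs /dev/vda9     # online grow to the partition size, as the unit did
  $ df -h /                 # verify the new capacity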
Jul 14 21:50:01.947664 locksmithd[1459]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 21:50:02.046371 containerd[1433]: time="2025-07-14T21:50:02.046284498Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 14 21:50:02.081197 containerd[1433]: time="2025-07-14T21:50:02.081149378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:50:02.082875 containerd[1433]: time="2025-07-14T21:50:02.082833538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:50:02.082910 containerd[1433]: time="2025-07-14T21:50:02.082874538Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 21:50:02.082910 containerd[1433]: time="2025-07-14T21:50:02.082893298Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 21:50:02.083075 containerd[1433]: time="2025-07-14T21:50:02.083053458Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 21:50:02.083101 containerd[1433]: time="2025-07-14T21:50:02.083076338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 14 21:50:02.083692 containerd[1433]: time="2025-07-14T21:50:02.083131618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:50:02.083692 containerd[1433]: time="2025-07-14T21:50:02.083147658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:50:02.083692 containerd[1433]: time="2025-07-14T21:50:02.083309858Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:50:02.083692 containerd[1433]: time="2025-07-14T21:50:02.083324378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 21:50:02.083692 containerd[1433]: time="2025-07-14T21:50:02.083337178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:50:02.083692 containerd[1433]: time="2025-07-14T21:50:02.083347178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 21:50:02.083692 containerd[1433]: time="2025-07-14T21:50:02.083421818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:50:02.083692 containerd[1433]: time="2025-07-14T21:50:02.083642858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 21:50:02.083893 containerd[1433]: time="2025-07-14T21:50:02.083749618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 21:50:02.083893 containerd[1433]: time="2025-07-14T21:50:02.083764298Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 21:50:02.083893 containerd[1433]: time="2025-07-14T21:50:02.083844698Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 21:50:02.083893 containerd[1433]: time="2025-07-14T21:50:02.083882498Z" level=info msg="metadata content store policy set" policy=shared Jul 14 21:50:02.087079 containerd[1433]: time="2025-07-14T21:50:02.087047018Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 21:50:02.087146 containerd[1433]: time="2025-07-14T21:50:02.087100618Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 21:50:02.087146 containerd[1433]: time="2025-07-14T21:50:02.087118058Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 21:50:02.087146 containerd[1433]: time="2025-07-14T21:50:02.087133738Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 21:50:02.087216 containerd[1433]: time="2025-07-14T21:50:02.087153338Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 21:50:02.087309 containerd[1433]: time="2025-07-14T21:50:02.087290178Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087553218Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087722698Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087740138Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087753418Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087774458Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087787258Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087800538Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087813978Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087828858Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087842578Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087855738Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087869338Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087889898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088324 containerd[1433]: time="2025-07-14T21:50:02.087904058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.087916338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.087929018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.087941858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.087954418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.087966818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.087981458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.087994178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.088008458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.088019698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.088031458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.088044258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.088059578Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.088080578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.088101458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 14 21:50:02.088648 containerd[1433]: time="2025-07-14T21:50:02.088118938Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 21:50:02.088889 containerd[1433]: time="2025-07-14T21:50:02.088239578Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 21:50:02.088889 containerd[1433]: time="2025-07-14T21:50:02.088256738Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 21:50:02.088889 containerd[1433]: time="2025-07-14T21:50:02.088267258Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 21:50:02.088889 containerd[1433]: time="2025-07-14T21:50:02.088279738Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 21:50:02.088889 containerd[1433]: time="2025-07-14T21:50:02.088289738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.089012 containerd[1433]: time="2025-07-14T21:50:02.088990938Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 21:50:02.089065 containerd[1433]: time="2025-07-14T21:50:02.089051458Z" level=info msg="NRI interface is disabled by configuration." Jul 14 21:50:02.089121 containerd[1433]: time="2025-07-14T21:50:02.089108138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 14 21:50:02.091207 containerd[1433]: time="2025-07-14T21:50:02.089504418Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 21:50:02.091490 containerd[1433]: time="2025-07-14T21:50:02.091469578Z" level=info msg="Connect containerd service" Jul 14 21:50:02.091536 containerd[1433]: time="2025-07-14T21:50:02.091526418Z" level=info msg="using legacy CRI server" Jul 14 21:50:02.091577 containerd[1433]: time="2025-07-14T21:50:02.091537298Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 21:50:02.091663 containerd[1433]: time="2025-07-14T21:50:02.091643058Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 21:50:02.092302 containerd[1433]: time="2025-07-14T21:50:02.092277978Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 21:50:02.094032 containerd[1433]: time="2025-07-14T21:50:02.092546738Z" level=info msg="Start subscribing containerd event" Jul 14 21:50:02.094032 containerd[1433]: time="2025-07-14T21:50:02.092613378Z" level=info msg="Start recovering state" Jul 14 21:50:02.094032 containerd[1433]: time="2025-07-14T21:50:02.092695058Z" level=info msg="Start event monitor" Jul 14 21:50:02.094032 containerd[1433]: time="2025-07-14T21:50:02.092708618Z" level=info msg="Start snapshots syncer" Jul 14 21:50:02.094032 containerd[1433]: time="2025-07-14T21:50:02.092717538Z" level=info msg="Start cni network conf syncer for default" Jul 14 21:50:02.094032 containerd[1433]: time="2025-07-14T21:50:02.092724938Z" level=info msg="Start streaming server" Jul 14 21:50:02.094032 containerd[1433]: time="2025-07-14T21:50:02.092787458Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 21:50:02.094032 containerd[1433]: time="2025-07-14T21:50:02.092828138Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 21:50:02.094032 containerd[1433]: time="2025-07-14T21:50:02.092883338Z" level=info msg="containerd successfully booted in 0.047585s" Jul 14 21:50:02.092981 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 21:50:02.209080 tar[1424]: linux-arm64/README.md Jul 14 21:50:02.220236 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 21:50:02.320831 sshd_keygen[1417]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 21:50:02.339964 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 21:50:02.348850 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 21:50:02.355373 systemd[1]: issuegen.service: Deactivated successfully. 
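The run above ends with containerd's CRI plugin configured for the overlayfs snapshotter and runc with SystemdCgroup:true, a benign warning that /etc/cni/net.d is still empty (a CNI add-on is expected to install that config later), and the daemon serving on /run/containerd/containerd.sock. A minimal sketch of what "Connect containerd service" looks like from the client side, assuming the v1 Go client module github.com/containerd/containerd; the socket path and the k8s.io namespace (where the CRI plugin keeps its state) come from the log, everything else is illustrative:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the same socket the daemon reports serving on above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// The CRI plugin stores Kubernetes images and containers under "k8s.io".
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	version, err := client.Version(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("containerd %s (revision %s)\n", version.Version, version.Revision)
}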
Jul 14 21:50:02.355643 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 21:50:02.358286 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 21:50:02.369215 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 21:50:02.371946 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 21:50:02.374137 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 14 21:50:02.375382 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 21:50:03.050763 systemd-networkd[1364]: eth0: Gained IPv6LL Jul 14 21:50:03.054675 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 21:50:03.056449 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 21:50:03.065837 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 21:50:03.068175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:50:03.070234 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 21:50:03.085953 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 21:50:03.086155 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 21:50:03.089039 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 21:50:03.090688 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 21:50:03.667408 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:50:03.669039 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 21:50:03.670188 systemd[1]: Startup finished in 581ms (kernel) + 4.771s (initrd) + 3.746s (userspace) = 9.098s. Jul 14 21:50:03.672979 (kubelet)[1512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 21:50:04.116852 kubelet[1512]: E0714 21:50:04.116413 1512 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:50:04.121674 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:50:04.121839 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:50:07.706173 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 21:50:07.707252 systemd[1]: Started sshd@0-10.0.0.52:22-10.0.0.1:33570.service - OpenSSH per-connection server daemon (10.0.0.1:33570). Jul 14 21:50:07.753591 sshd[1525]: Accepted publickey for core from 10.0.0.1 port 33570 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:50:07.754746 sshd[1525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:50:07.765674 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 21:50:07.775917 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 21:50:07.777619 systemd-logind[1412]: New session 1 of user core. Jul 14 21:50:07.784267 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 21:50:07.787464 systemd[1]: Starting user@500.service - User Manager for UID 500... 
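The kubelet crash above (status=1/FAILURE) happens because /var/lib/kubelet/config.yaml does not exist yet; kubeadm init or join is what normally writes that file, so the failure is expected on a node that has not been joined to a cluster. A trivial sketch of the same existence check, with the path taken from the error message and everything else illustrative:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"log"
	"os"
)

func main() {
	// kubelet.service keeps exiting with status 1 until something writes this file.
	const path = "/var/lib/kubelet/config.yaml"

	_, err := os.Stat(path)
	switch {
	case err == nil:
		fmt.Println("kubelet config present; the restart loop should succeed")
	case errors.Is(err, fs.ErrNotExist):
		fmt.Printf("open %s: no such file or directory (the run.go:72 failure above)\n", path)
	default:
		log.Fatal(err)
	}
}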
Jul 14 21:50:07.793766 (systemd)[1529]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 21:50:07.876927 systemd[1529]: Queued start job for default target default.target. Jul 14 21:50:07.887525 systemd[1529]: Created slice app.slice - User Application Slice. Jul 14 21:50:07.887554 systemd[1529]: Reached target paths.target - Paths. Jul 14 21:50:07.887588 systemd[1529]: Reached target timers.target - Timers. Jul 14 21:50:07.888782 systemd[1529]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 21:50:07.899222 systemd[1529]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 21:50:07.899280 systemd[1529]: Reached target sockets.target - Sockets. Jul 14 21:50:07.899292 systemd[1529]: Reached target basic.target - Basic System. Jul 14 21:50:07.899327 systemd[1529]: Reached target default.target - Main User Target. Jul 14 21:50:07.899353 systemd[1529]: Startup finished in 100ms. Jul 14 21:50:07.899486 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 21:50:07.900830 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 21:50:07.956033 systemd[1]: Started sshd@1-10.0.0.52:22-10.0.0.1:33576.service - OpenSSH per-connection server daemon (10.0.0.1:33576). Jul 14 21:50:07.992903 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 33576 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:50:07.994135 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:50:07.999084 systemd-logind[1412]: New session 2 of user core. Jul 14 21:50:08.018738 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 21:50:08.070741 sshd[1540]: pam_unix(sshd:session): session closed for user core Jul 14 21:50:08.091024 systemd[1]: sshd@1-10.0.0.52:22-10.0.0.1:33576.service: Deactivated successfully. Jul 14 21:50:08.092355 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 21:50:08.094713 systemd-logind[1412]: Session 2 logged out. Waiting for processes to exit. Jul 14 21:50:08.104884 systemd[1]: Started sshd@2-10.0.0.52:22-10.0.0.1:33584.service - OpenSSH per-connection server daemon (10.0.0.1:33584). Jul 14 21:50:08.105835 systemd-logind[1412]: Removed session 2. Jul 14 21:50:08.137554 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 33584 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:50:08.138778 sshd[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:50:08.142612 systemd-logind[1412]: New session 3 of user core. Jul 14 21:50:08.157732 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 21:50:08.206470 sshd[1547]: pam_unix(sshd:session): session closed for user core Jul 14 21:50:08.216678 systemd[1]: sshd@2-10.0.0.52:22-10.0.0.1:33584.service: Deactivated successfully. Jul 14 21:50:08.217924 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 21:50:08.219763 systemd-logind[1412]: Session 3 logged out. Waiting for processes to exit. Jul 14 21:50:08.235033 systemd[1]: Started sshd@3-10.0.0.52:22-10.0.0.1:33586.service - OpenSSH per-connection server daemon (10.0.0.1:33586). Jul 14 21:50:08.236250 systemd-logind[1412]: Removed session 3. 
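The repeated "Accepted publickey for core ... SHA256:M1w9..." lines show OpenSSH's unpadded-base64 SHA-256 fingerprint of the client's key. A short sketch that prints the same format for a public-key file, assuming golang.org/x/crypto/ssh; the file path argument is illustrative:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <pubkey-file>", os.Args[0])
	}
	raw, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	// Parses one line in authorized_keys / .pub format.
	pub, _, _, _, err := ssh.ParseAuthorizedKey(raw)
	if err != nil {
		log.Fatal(err)
	}
	// Prints e.g. "SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U".
	fmt.Println(ssh.FingerprintSHA256(pub))
}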
Jul 14 21:50:08.267417 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 33586 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:50:08.268629 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:50:08.273155 systemd-logind[1412]: New session 4 of user core. Jul 14 21:50:08.282706 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 21:50:08.334017 sshd[1554]: pam_unix(sshd:session): session closed for user core Jul 14 21:50:08.347929 systemd[1]: sshd@3-10.0.0.52:22-10.0.0.1:33586.service: Deactivated successfully. Jul 14 21:50:08.349307 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 21:50:08.350018 systemd-logind[1412]: Session 4 logged out. Waiting for processes to exit. Jul 14 21:50:08.351763 systemd[1]: Started sshd@4-10.0.0.52:22-10.0.0.1:33596.service - OpenSSH per-connection server daemon (10.0.0.1:33596). Jul 14 21:50:08.353618 systemd-logind[1412]: Removed session 4. Jul 14 21:50:08.389806 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 33596 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:50:08.391016 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:50:08.395639 systemd-logind[1412]: New session 5 of user core. Jul 14 21:50:08.402749 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 21:50:08.467026 sudo[1564]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 21:50:08.467321 sudo[1564]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:50:08.488515 sudo[1564]: pam_unix(sudo:session): session closed for user root Jul 14 21:50:08.490269 sshd[1561]: pam_unix(sshd:session): session closed for user core Jul 14 21:50:08.500065 systemd[1]: sshd@4-10.0.0.52:22-10.0.0.1:33596.service: Deactivated successfully. Jul 14 21:50:08.501506 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 21:50:08.504048 systemd-logind[1412]: Session 5 logged out. Waiting for processes to exit. Jul 14 21:50:08.513916 systemd[1]: Started sshd@5-10.0.0.52:22-10.0.0.1:33606.service - OpenSSH per-connection server daemon (10.0.0.1:33606). Jul 14 21:50:08.515040 systemd-logind[1412]: Removed session 5. Jul 14 21:50:08.547277 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 33606 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:50:08.548693 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:50:08.552613 systemd-logind[1412]: New session 6 of user core. Jul 14 21:50:08.559703 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 21:50:08.612358 sudo[1573]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 21:50:08.612673 sudo[1573]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:50:08.615656 sudo[1573]: pam_unix(sudo:session): session closed for user root Jul 14 21:50:08.620284 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 14 21:50:08.620568 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:50:08.637809 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 14 21:50:08.639098 auditctl[1576]: No rules Jul 14 21:50:08.640001 systemd[1]: audit-rules.service: Deactivated successfully. 
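"auditctl[1576]: No rules" is what the audit tooling prints while the kernel's rule list is empty, here between the sudo rm of the rule files and the audit-rules.service restart. A sketch that shells out to the same listing command; it assumes auditctl is on PATH and the caller has audit privileges:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// `auditctl -l` lists the loaded audit rules, printing "No rules" when empty.
	out, err := exec.Command("auditctl", "-l").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatal(err)
	}
}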
Jul 14 21:50:08.641611 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 14 21:50:08.643322 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 21:50:08.667241 augenrules[1594]: No rules Jul 14 21:50:08.668582 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 21:50:08.670025 sudo[1572]: pam_unix(sudo:session): session closed for user root Jul 14 21:50:08.672644 sshd[1569]: pam_unix(sshd:session): session closed for user core Jul 14 21:50:08.690894 systemd[1]: sshd@5-10.0.0.52:22-10.0.0.1:33606.service: Deactivated successfully. Jul 14 21:50:08.692391 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 21:50:08.693737 systemd-logind[1412]: Session 6 logged out. Waiting for processes to exit. Jul 14 21:50:08.694797 systemd[1]: Started sshd@6-10.0.0.52:22-10.0.0.1:33612.service - OpenSSH per-connection server daemon (10.0.0.1:33612). Jul 14 21:50:08.695475 systemd-logind[1412]: Removed session 6. Jul 14 21:50:08.732010 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 33612 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:50:08.733158 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:50:08.737803 systemd-logind[1412]: New session 7 of user core. Jul 14 21:50:08.746697 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 21:50:08.801525 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 21:50:08.802186 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 21:50:09.114829 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 14 21:50:09.114913 (dockerd)[1625]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 21:50:09.387960 dockerd[1625]: time="2025-07-14T21:50:09.387833778Z" level=info msg="Starting up" Jul 14 21:50:09.536973 dockerd[1625]: time="2025-07-14T21:50:09.536923818Z" level=info msg="Loading containers: start." Jul 14 21:50:09.619709 kernel: Initializing XFRM netlink socket Jul 14 21:50:09.678441 systemd-networkd[1364]: docker0: Link UP Jul 14 21:50:09.696854 dockerd[1625]: time="2025-07-14T21:50:09.696755698Z" level=info msg="Loading containers: done." Jul 14 21:50:09.715062 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3269753872-merged.mount: Deactivated successfully. Jul 14 21:50:09.716359 dockerd[1625]: time="2025-07-14T21:50:09.716315538Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 21:50:09.716699 dockerd[1625]: time="2025-07-14T21:50:09.716647778Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 14 21:50:09.716803 dockerd[1625]: time="2025-07-14T21:50:09.716771698Z" level=info msg="Daemon has completed initialization" Jul 14 21:50:09.747063 dockerd[1625]: time="2025-07-14T21:50:09.746915618Z" level=info msg="API listen on /run/docker.sock" Jul 14 21:50:09.747339 systemd[1]: Started docker.service - Docker Application Container Engine. 
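dockerd now answers on /run/docker.sock (with /var/run a symlink to /run on systemd hosts, matching the docker.socket path rewrite noted later in this log). A minimal liveness check with the official Go SDK, assuming github.com/docker/docker/client; FromEnv falls back to the default unix socket when DOCKER_HOST is unset:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// Ping hits the daemon's /_ping endpoint over the unix socket.
	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker API version:", ping.APIVersion)
}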
Jul 14 21:50:10.357126 containerd[1433]: time="2025-07-14T21:50:10.356862978Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 14 21:50:11.041179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1232936796.mount: Deactivated successfully. Jul 14 21:50:11.925543 containerd[1433]: time="2025-07-14T21:50:11.924877378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:11.925543 containerd[1433]: time="2025-07-14T21:50:11.925511058Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196" Jul 14 21:50:11.925543 containerd[1433]: time="2025-07-14T21:50:11.926105178Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:11.929406 containerd[1433]: time="2025-07-14T21:50:11.929345578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:11.931715 containerd[1433]: time="2025-07-14T21:50:11.930951538Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.57404232s" Jul 14 21:50:11.931715 containerd[1433]: time="2025-07-14T21:50:11.930998218Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 14 21:50:11.932157 containerd[1433]: time="2025-07-14T21:50:11.932120258Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 14 21:50:12.929166 containerd[1433]: time="2025-07-14T21:50:12.929116018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:12.930378 containerd[1433]: time="2025-07-14T21:50:12.930259978Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230" Jul 14 21:50:12.931262 containerd[1433]: time="2025-07-14T21:50:12.931204138Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:12.935745 containerd[1433]: time="2025-07-14T21:50:12.934288738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:12.935745 containerd[1433]: time="2025-07-14T21:50:12.935632138Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.0034754s" Jul 14 
21:50:12.935745 containerd[1433]: time="2025-07-14T21:50:12.935667698Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 14 21:50:12.936115 containerd[1433]: time="2025-07-14T21:50:12.936090218Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 14 21:50:13.954955 containerd[1433]: time="2025-07-14T21:50:13.954896018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:13.955599 containerd[1433]: time="2025-07-14T21:50:13.955548498Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143" Jul 14 21:50:13.956392 containerd[1433]: time="2025-07-14T21:50:13.956362138Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:13.959430 containerd[1433]: time="2025-07-14T21:50:13.959376378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:13.960648 containerd[1433]: time="2025-07-14T21:50:13.960618818Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.02449376s" Jul 14 21:50:13.960837 containerd[1433]: time="2025-07-14T21:50:13.960722818Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 14 21:50:13.961423 containerd[1433]: time="2025-07-14T21:50:13.961312818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 14 21:50:14.372069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 21:50:14.386765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:50:14.535874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:50:14.540166 (kubelet)[1844]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 21:50:14.592713 kubelet[1844]: E0714 21:50:14.592644 1844 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 21:50:14.596737 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 21:50:14.596897 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 21:50:14.960791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1788070241.mount: Deactivated successfully. 
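The PullImage/ImageCreate sequences above are the CRI plugin resolving a tag, fetching layers, and unpacking into the overlayfs snapshotter. Roughly the same operation through the containerd Go client, again assuming the v1 module github.com/containerd/containerd; the reference is one of the tags from this log:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// WithPullUnpack unpacks the layers into the default snapshotter after the fetch.
	img, err := client.Pull(ctx, "registry.k8s.io/kube-proxy:v1.32.6", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	size, err := img.Size(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}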
Jul 14 21:50:15.338366 containerd[1433]: time="2025-07-14T21:50:15.338225858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:15.339258 containerd[1433]: time="2025-07-14T21:50:15.339218418Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 14 21:50:15.340318 containerd[1433]: time="2025-07-14T21:50:15.340277498Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:15.342791 containerd[1433]: time="2025-07-14T21:50:15.342745578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:15.343639 containerd[1433]: time="2025-07-14T21:50:15.343614898Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.38160356s" Jul 14 21:50:15.343819 containerd[1433]: time="2025-07-14T21:50:15.343717018Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 14 21:50:15.344235 containerd[1433]: time="2025-07-14T21:50:15.344212418Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 21:50:15.893749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481431389.mount: Deactivated successfully. 
Jul 14 21:50:16.661605 containerd[1433]: time="2025-07-14T21:50:16.661520938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:16.662175 containerd[1433]: time="2025-07-14T21:50:16.662144938Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 14 21:50:16.663152 containerd[1433]: time="2025-07-14T21:50:16.663100458Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:16.666213 containerd[1433]: time="2025-07-14T21:50:16.666158378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:16.667806 containerd[1433]: time="2025-07-14T21:50:16.667484058Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.3232372s" Jul 14 21:50:16.667806 containerd[1433]: time="2025-07-14T21:50:16.667518618Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 14 21:50:16.667989 containerd[1433]: time="2025-07-14T21:50:16.667918698Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 21:50:17.171657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount62861324.mount: Deactivated successfully. 
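Each completed pull records both a repo tag and a repo digest (e.g. registry.k8s.io/coredns/coredns@sha256:9caab…); the digest is the SHA-256 of the image's manifest bytes, which is why it pins content exactly while a tag can move. A toy sketch of the digest notation itself, hashing an arbitrary file named on the command line:

package main

import (
	"crypto/sha256"
	"fmt"
	"log"
	"os"
)

func main() {
	if len(os.Args) != 2 {
		log.Fatalf("usage: %s <file>", os.Args[0])
	}
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	// Same "sha256:<64 hex chars>" form used in the repo digests above.
	fmt.Printf("sha256:%x\n", sha256.Sum256(data))
}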
Jul 14 21:50:17.180893 containerd[1433]: time="2025-07-14T21:50:17.179918978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:17.180893 containerd[1433]: time="2025-07-14T21:50:17.180830138Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 14 21:50:17.181306 containerd[1433]: time="2025-07-14T21:50:17.181281338Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:17.183290 containerd[1433]: time="2025-07-14T21:50:17.183254978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:17.184545 containerd[1433]: time="2025-07-14T21:50:17.184513538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 516.56632ms" Jul 14 21:50:17.184676 containerd[1433]: time="2025-07-14T21:50:17.184659578Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 14 21:50:17.185141 containerd[1433]: time="2025-07-14T21:50:17.185119218Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 14 21:50:17.955838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount248443067.mount: Deactivated successfully. Jul 14 21:50:19.383287 containerd[1433]: time="2025-07-14T21:50:19.383130778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:19.429185 containerd[1433]: time="2025-07-14T21:50:19.429096978Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Jul 14 21:50:19.553599 containerd[1433]: time="2025-07-14T21:50:19.553524618Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:19.558630 containerd[1433]: time="2025-07-14T21:50:19.558590298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:19.559475 containerd[1433]: time="2025-07-14T21:50:19.559423938Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.37427332s" Jul 14 21:50:19.559475 containerd[1433]: time="2025-07-14T21:50:19.559468298Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 14 21:50:24.077596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 21:50:24.088783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:50:24.109992 systemd[1]: Reloading requested from client PID 1999 ('systemctl') (unit session-7.scope)... Jul 14 21:50:24.110008 systemd[1]: Reloading... Jul 14 21:50:24.174595 zram_generator::config[2038]: No configuration found. Jul 14 21:50:24.298549 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:50:24.351534 systemd[1]: Reloading finished in 241 ms. Jul 14 21:50:24.389075 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 14 21:50:24.389138 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 14 21:50:24.390602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:50:24.392895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:50:24.496702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:50:24.501134 (kubelet)[2084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 21:50:24.537946 kubelet[2084]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:50:24.537946 kubelet[2084]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 21:50:24.537946 kubelet[2084]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
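All three deprecation warnings point at the kubelet config file, i.e. the same /var/lib/kubelet/config.yaml whose absence caused the earlier crash loops. A sketch parsing a hypothetical minimal KubeletConfiguration, assuming gopkg.in/yaml.v3; the fields shown are a plausible minimum rather than this host's actual file, though cgroupDriver: systemd is consistent with the SystemdCgroup:true runc option in containerd's config earlier in the log:

package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3"
)

// Hypothetical stand-in for /var/lib/kubelet/config.yaml; real files
// written by kubeadm carry many more fields.
const config = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
`

func main() {
	var doc map[string]interface{}
	if err := yaml.Unmarshal([]byte(config), &doc); err != nil {
		log.Fatal(err)
	}
	fmt.Println("kind:", doc["kind"], "cgroupDriver:", doc["cgroupDriver"])
}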
Jul 14 21:50:24.538282 kubelet[2084]: I0714 21:50:24.538018 2084 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:50:25.256137 kubelet[2084]: I0714 21:50:25.256086 2084 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 21:50:25.256137 kubelet[2084]: I0714 21:50:25.256125 2084 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:50:25.256445 kubelet[2084]: I0714 21:50:25.256412 2084 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 21:50:25.293645 kubelet[2084]: E0714 21:50:25.293594 2084 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:50:25.295445 kubelet[2084]: I0714 21:50:25.295407 2084 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:50:25.302527 kubelet[2084]: E0714 21:50:25.302486 2084 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:50:25.302527 kubelet[2084]: I0714 21:50:25.302519 2084 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 21:50:25.306103 kubelet[2084]: I0714 21:50:25.305817 2084 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 21:50:25.307080 kubelet[2084]: I0714 21:50:25.307029 2084 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:50:25.307353 kubelet[2084]: I0714 21:50:25.307162 2084 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 21:50:25.307592 kubelet[2084]: I0714 21:50:25.307576 2084 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 21:50:25.307657 kubelet[2084]: I0714 21:50:25.307648 2084 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 21:50:25.308052 kubelet[2084]: I0714 21:50:25.307875 2084 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:50:25.310282 kubelet[2084]: I0714 21:50:25.310257 2084 kubelet.go:446] "Attempting to sync node with API server" Jul 14 21:50:25.310393 kubelet[2084]: I0714 21:50:25.310382 2084 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:50:25.310480 kubelet[2084]: I0714 21:50:25.310465 2084 kubelet.go:352] "Adding apiserver pod source" Jul 14 21:50:25.310544 kubelet[2084]: I0714 21:50:25.310535 2084 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 21:50:25.314617 kubelet[2084]: W0714 21:50:25.313916 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jul 14 21:50:25.314617 kubelet[2084]: E0714 21:50:25.313987 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:50:25.314927 kubelet[2084]: I0714 21:50:25.314892 2084 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 21:50:25.315520 kubelet[2084]: I0714 21:50:25.315495 2084 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:50:25.315768 kubelet[2084]: W0714 21:50:25.315670 2084 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 21:50:25.315768 kubelet[2084]: W0714 21:50:25.315664 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jul 14 21:50:25.315768 kubelet[2084]: E0714 21:50:25.315732 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:50:25.316592 kubelet[2084]: I0714 21:50:25.316574 2084 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 21:50:25.316639 kubelet[2084]: I0714 21:50:25.316614 2084 server.go:1287] "Started kubelet" Jul 14 21:50:25.317190 kubelet[2084]: I0714 21:50:25.316694 2084 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:50:25.321525 kubelet[2084]: I0714 21:50:25.320741 2084 server.go:479] "Adding debug handlers to kubelet server" Jul 14 21:50:25.321525 kubelet[2084]: I0714 21:50:25.320806 2084 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:50:25.321525 kubelet[2084]: I0714 21:50:25.321098 2084 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:50:25.321525 kubelet[2084]: E0714 21:50:25.320890 2084 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.52:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523c974045d712 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 21:50:25.316591378 +0000 UTC m=+0.812590641,LastTimestamp:2025-07-14 21:50:25.316591378 +0000 UTC m=+0.812590641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 21:50:25.321525 kubelet[2084]: I0714 21:50:25.321359 2084 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:50:25.321830 kubelet[2084]: I0714 21:50:25.321786 2084 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 21:50:25.323247 kubelet[2084]: E0714 21:50:25.323207 2084 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:50:25.323687 kubelet[2084]: E0714 21:50:25.323452 2084 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 21:50:25.323687 kubelet[2084]: I0714 21:50:25.323578 2084 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 21:50:25.323782 kubelet[2084]: I0714 21:50:25.323729 2084 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 21:50:25.323910 kubelet[2084]: I0714 21:50:25.323883 2084 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:50:25.324159 kubelet[2084]: I0714 21:50:25.324124 2084 factory.go:221] Registration of the systemd container factory successfully Jul 14 21:50:25.324243 kubelet[2084]: I0714 21:50:25.324205 2084 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:50:25.324322 kubelet[2084]: E0714 21:50:25.324275 2084 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="200ms" Jul 14 21:50:25.324900 kubelet[2084]: W0714 21:50:25.324815 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jul 14 21:50:25.324900 kubelet[2084]: E0714 21:50:25.324899 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:50:25.325176 kubelet[2084]: I0714 21:50:25.325153 2084 factory.go:221] Registration of the containerd container factory successfully Jul 14 21:50:25.337731 kubelet[2084]: I0714 21:50:25.337697 2084 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 21:50:25.337731 kubelet[2084]: I0714 21:50:25.337717 2084 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 21:50:25.337731 kubelet[2084]: I0714 21:50:25.337738 2084 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:50:25.341813 kubelet[2084]: I0714 21:50:25.341770 2084 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 21:50:25.342834 kubelet[2084]: I0714 21:50:25.342808 2084 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 21:50:25.342834 kubelet[2084]: I0714 21:50:25.342833 2084 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 14 21:50:25.342912 kubelet[2084]: I0714 21:50:25.342852 2084 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
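Every reflector, lease, and certificate error in this stretch reduces to one condition: nothing is listening on https://10.0.0.52:6443 yet, because the apiserver being dialed runs as a static pod that this same kubelet has not started; the errors clear once the control-plane pods come up. A sketch of the raw reachability check, with the address taken from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.52:6443", 2*time.Second)
	if err != nil {
		// Prints the same "connect: connection refused" the reflectors report.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}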
Jul 14 21:50:25.342912 kubelet[2084]: I0714 21:50:25.342860 2084 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 21:50:25.342912 kubelet[2084]: E0714 21:50:25.342901 2084 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:50:25.343843 kubelet[2084]: W0714 21:50:25.343802 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jul 14 21:50:25.343913 kubelet[2084]: E0714 21:50:25.343858 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:50:25.423396 kubelet[2084]: E0714 21:50:25.423339 2084 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:50:25.443616 kubelet[2084]: I0714 21:50:25.443545 2084 policy_none.go:49] "None policy: Start" Jul 14 21:50:25.443616 kubelet[2084]: I0714 21:50:25.443589 2084 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 21:50:25.443616 kubelet[2084]: I0714 21:50:25.443603 2084 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:50:25.443764 kubelet[2084]: E0714 21:50:25.443627 2084 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 21:50:25.449907 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 14 21:50:25.462506 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 14 21:50:25.465299 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 14 21:50:25.472804 kubelet[2084]: I0714 21:50:25.472280 2084 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:50:25.472804 kubelet[2084]: I0714 21:50:25.472495 2084 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:50:25.472804 kubelet[2084]: I0714 21:50:25.472507 2084 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:50:25.472804 kubelet[2084]: I0714 21:50:25.472744 2084 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:50:25.473697 kubelet[2084]: E0714 21:50:25.473666 2084 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 14 21:50:25.473854 kubelet[2084]: E0714 21:50:25.473714 2084 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 21:50:25.525620 kubelet[2084]: E0714 21:50:25.524829 2084 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="400ms" Jul 14 21:50:25.574402 kubelet[2084]: I0714 21:50:25.574358 2084 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:50:25.574897 kubelet[2084]: E0714 21:50:25.574855 2084 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Jul 14 21:50:25.651413 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 14 21:50:25.661309 kubelet[2084]: E0714 21:50:25.661270 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:50:25.663991 systemd[1]: Created slice kubepods-burstable-pod315974e79ed4bf8459c877fea1cf440c.slice - libcontainer container kubepods-burstable-pod315974e79ed4bf8459c877fea1cf440c.slice. Jul 14 21:50:25.676135 kubelet[2084]: E0714 21:50:25.675941 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:50:25.679107 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. 
Jul 14 21:50:25.680717 kubelet[2084]: E0714 21:50:25.680515 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:50:25.725850 kubelet[2084]: I0714 21:50:25.725816 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:25.725850 kubelet[2084]: I0714 21:50:25.725854 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:25.725945 kubelet[2084]: I0714 21:50:25.725872 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:25.725945 kubelet[2084]: I0714 21:50:25.725889 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:50:25.725945 kubelet[2084]: I0714 21:50:25.725909 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/315974e79ed4bf8459c877fea1cf440c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"315974e79ed4bf8459c877fea1cf440c\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:50:25.725945 kubelet[2084]: I0714 21:50:25.725924 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/315974e79ed4bf8459c877fea1cf440c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"315974e79ed4bf8459c877fea1cf440c\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:50:25.725945 kubelet[2084]: I0714 21:50:25.725939 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/315974e79ed4bf8459c877fea1cf440c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"315974e79ed4bf8459c877fea1cf440c\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:50:25.726072 kubelet[2084]: I0714 21:50:25.725955 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:25.726072 kubelet[2084]: I0714 21:50:25.725972 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:25.776937 kubelet[2084]: I0714 21:50:25.776846 2084 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:50:25.777319 kubelet[2084]: E0714 21:50:25.777289 2084 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Jul 14 21:50:25.925810 kubelet[2084]: E0714 21:50:25.925767 2084 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="800ms" Jul 14 21:50:25.962074 kubelet[2084]: E0714 21:50:25.962044 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:25.962753 containerd[1433]: time="2025-07-14T21:50:25.962668258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 14 21:50:25.977222 kubelet[2084]: E0714 21:50:25.977181 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:25.977973 containerd[1433]: time="2025-07-14T21:50:25.977594298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:315974e79ed4bf8459c877fea1cf440c,Namespace:kube-system,Attempt:0,}" Jul 14 21:50:25.981187 kubelet[2084]: E0714 21:50:25.981150 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:25.981483 containerd[1433]: time="2025-07-14T21:50:25.981451778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 14 21:50:26.146026 kubelet[2084]: W0714 21:50:26.145863 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jul 14 21:50:26.146026 kubelet[2084]: E0714 21:50:26.145935 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:50:26.178641 kubelet[2084]: I0714 21:50:26.178390 2084 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:50:26.178765 kubelet[2084]: E0714 21:50:26.178732 2084 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Jul 14 21:50:26.464233 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2702983755.mount: Deactivated successfully. Jul 14 21:50:26.473427 containerd[1433]: time="2025-07-14T21:50:26.473380298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:50:26.475848 containerd[1433]: time="2025-07-14T21:50:26.475740858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 21:50:26.476589 containerd[1433]: time="2025-07-14T21:50:26.476531818Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:50:26.477724 containerd[1433]: time="2025-07-14T21:50:26.477647378Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:50:26.478621 containerd[1433]: time="2025-07-14T21:50:26.478598778Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 14 21:50:26.479160 containerd[1433]: time="2025-07-14T21:50:26.479132738Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:50:26.479791 containerd[1433]: time="2025-07-14T21:50:26.479694578Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 21:50:26.483915 containerd[1433]: time="2025-07-14T21:50:26.483839378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 21:50:26.485579 containerd[1433]: time="2025-07-14T21:50:26.484552458Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 506.8786ms" Jul 14 21:50:26.487692 containerd[1433]: time="2025-07-14T21:50:26.487661778Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 506.14944ms" Jul 14 21:50:26.490666 containerd[1433]: time="2025-07-14T21:50:26.490626818Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 527.81628ms" Jul 14 21:50:26.616164 containerd[1433]: time="2025-07-14T21:50:26.616076098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:50:26.616456 containerd[1433]: time="2025-07-14T21:50:26.616385178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:50:26.616492 containerd[1433]: time="2025-07-14T21:50:26.616425658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:50:26.616492 containerd[1433]: time="2025-07-14T21:50:26.616465578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:50:26.616836 containerd[1433]: time="2025-07-14T21:50:26.616624298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:50:26.617540 containerd[1433]: time="2025-07-14T21:50:26.617477098Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:50:26.617540 containerd[1433]: time="2025-07-14T21:50:26.616140818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:50:26.617651 containerd[1433]: time="2025-07-14T21:50:26.617536538Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:50:26.617651 containerd[1433]: time="2025-07-14T21:50:26.617622298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:50:26.617651 containerd[1433]: time="2025-07-14T21:50:26.617532058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:50:26.617651 containerd[1433]: time="2025-07-14T21:50:26.617550618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:50:26.618127 containerd[1433]: time="2025-07-14T21:50:26.618046458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:50:26.640759 systemd[1]: Started cri-containerd-baa07fe2fb8bea9a8d19b609404818150df55021f331e746aade5b49af0b94b0.scope - libcontainer container baa07fe2fb8bea9a8d19b609404818150df55021f331e746aade5b49af0b94b0. Jul 14 21:50:26.644492 systemd[1]: Started cri-containerd-6457e4680f6c38b70d81e72f869c7b25942a093badc3330ff352141a59a363c4.scope - libcontainer container 6457e4680f6c38b70d81e72f869c7b25942a093badc3330ff352141a59a363c4. Jul 14 21:50:26.645936 systemd[1]: Started cri-containerd-dbd7800622d212e94a144e1a2c54a27e1baf1ba28beb523fea9b213ef55ac858.scope - libcontainer container dbd7800622d212e94a144e1a2c54a27e1baf1ba28beb523fea9b213ef55ac858. 
Jul 14 21:50:26.684971 containerd[1433]: time="2025-07-14T21:50:26.684921778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:315974e79ed4bf8459c877fea1cf440c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbd7800622d212e94a144e1a2c54a27e1baf1ba28beb523fea9b213ef55ac858\"" Jul 14 21:50:26.685102 containerd[1433]: time="2025-07-14T21:50:26.685072018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6457e4680f6c38b70d81e72f869c7b25942a093badc3330ff352141a59a363c4\"" Jul 14 21:50:26.685725 containerd[1433]: time="2025-07-14T21:50:26.685509378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"baa07fe2fb8bea9a8d19b609404818150df55021f331e746aade5b49af0b94b0\"" Jul 14 21:50:26.687108 kubelet[2084]: E0714 21:50:26.686902 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:26.687108 kubelet[2084]: E0714 21:50:26.686982 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:26.687454 kubelet[2084]: E0714 21:50:26.687114 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:26.689749 containerd[1433]: time="2025-07-14T21:50:26.689717538Z" level=info msg="CreateContainer within sandbox \"baa07fe2fb8bea9a8d19b609404818150df55021f331e746aade5b49af0b94b0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 21:50:26.689885 containerd[1433]: time="2025-07-14T21:50:26.689852098Z" level=info msg="CreateContainer within sandbox \"6457e4680f6c38b70d81e72f869c7b25942a093badc3330ff352141a59a363c4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 21:50:26.690577 kubelet[2084]: W0714 21:50:26.690502 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jul 14 21:50:26.690632 kubelet[2084]: E0714 21:50:26.690587 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:50:26.690920 containerd[1433]: time="2025-07-14T21:50:26.690772098Z" level=info msg="CreateContainer within sandbox \"dbd7800622d212e94a144e1a2c54a27e1baf1ba28beb523fea9b213ef55ac858\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 21:50:26.717015 containerd[1433]: time="2025-07-14T21:50:26.716901858Z" level=info msg="CreateContainer within sandbox \"baa07fe2fb8bea9a8d19b609404818150df55021f331e746aade5b49af0b94b0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cbe515b1435d601ed30afb04fd2370e47d479de86a4a2bb3d076da7ad827e372\"" Jul 14 21:50:26.718255 
containerd[1433]: time="2025-07-14T21:50:26.718042178Z" level=info msg="StartContainer for \"cbe515b1435d601ed30afb04fd2370e47d479de86a4a2bb3d076da7ad827e372\"" Jul 14 21:50:26.723181 containerd[1433]: time="2025-07-14T21:50:26.723108218Z" level=info msg="CreateContainer within sandbox \"dbd7800622d212e94a144e1a2c54a27e1baf1ba28beb523fea9b213ef55ac858\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"497474c6959092c08a7c0f9606578c0cfab4a7108e143c5f3904f52f79c659a9\"" Jul 14 21:50:26.723650 containerd[1433]: time="2025-07-14T21:50:26.723623338Z" level=info msg="StartContainer for \"497474c6959092c08a7c0f9606578c0cfab4a7108e143c5f3904f52f79c659a9\"" Jul 14 21:50:26.726371 kubelet[2084]: E0714 21:50:26.726340 2084 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="1.6s" Jul 14 21:50:26.728327 containerd[1433]: time="2025-07-14T21:50:26.728281018Z" level=info msg="CreateContainer within sandbox \"6457e4680f6c38b70d81e72f869c7b25942a093badc3330ff352141a59a363c4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e6abbbeea83dcfb29cfdfbda5bffecf5e029e1be5375000735beefcd4797819e\"" Jul 14 21:50:26.728852 containerd[1433]: time="2025-07-14T21:50:26.728823418Z" level=info msg="StartContainer for \"e6abbbeea83dcfb29cfdfbda5bffecf5e029e1be5375000735beefcd4797819e\"" Jul 14 21:50:26.744452 kubelet[2084]: W0714 21:50:26.744285 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jul 14 21:50:26.744452 kubelet[2084]: E0714 21:50:26.744368 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:50:26.746953 systemd[1]: Started cri-containerd-cbe515b1435d601ed30afb04fd2370e47d479de86a4a2bb3d076da7ad827e372.scope - libcontainer container cbe515b1435d601ed30afb04fd2370e47d479de86a4a2bb3d076da7ad827e372. Jul 14 21:50:26.750150 systemd[1]: Started cri-containerd-497474c6959092c08a7c0f9606578c0cfab4a7108e143c5f3904f52f79c659a9.scope - libcontainer container 497474c6959092c08a7c0f9606578c0cfab4a7108e143c5f3904f52f79c659a9. Jul 14 21:50:26.759739 systemd[1]: Started cri-containerd-e6abbbeea83dcfb29cfdfbda5bffecf5e029e1be5375000735beefcd4797819e.scope - libcontainer container e6abbbeea83dcfb29cfdfbda5bffecf5e029e1be5375000735beefcd4797819e. 
Jul 14 21:50:26.789992 containerd[1433]: time="2025-07-14T21:50:26.789952818Z" level=info msg="StartContainer for \"cbe515b1435d601ed30afb04fd2370e47d479de86a4a2bb3d076da7ad827e372\" returns successfully" Jul 14 21:50:26.811530 containerd[1433]: time="2025-07-14T21:50:26.811407338Z" level=info msg="StartContainer for \"497474c6959092c08a7c0f9606578c0cfab4a7108e143c5f3904f52f79c659a9\" returns successfully" Jul 14 21:50:26.834457 containerd[1433]: time="2025-07-14T21:50:26.834169218Z" level=info msg="StartContainer for \"e6abbbeea83dcfb29cfdfbda5bffecf5e029e1be5375000735beefcd4797819e\" returns successfully" Jul 14 21:50:26.907280 kubelet[2084]: W0714 21:50:26.907208 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.52:6443: connect: connection refused Jul 14 21:50:26.907280 kubelet[2084]: E0714 21:50:26.907282 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" Jul 14 21:50:26.980523 kubelet[2084]: I0714 21:50:26.980111 2084 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:50:26.980523 kubelet[2084]: E0714 21:50:26.980412 2084 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Jul 14 21:50:27.352301 kubelet[2084]: E0714 21:50:27.352207 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:50:27.352388 kubelet[2084]: E0714 21:50:27.352336 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:27.353912 kubelet[2084]: E0714 21:50:27.353886 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:50:27.354010 kubelet[2084]: E0714 21:50:27.353993 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:27.355806 kubelet[2084]: E0714 21:50:27.355785 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:50:27.355897 kubelet[2084]: E0714 21:50:27.355880 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:28.331051 kubelet[2084]: E0714 21:50:28.331013 2084 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 14 21:50:28.358579 kubelet[2084]: E0714 21:50:28.358104 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:50:28.358854 kubelet[2084]: E0714 
21:50:28.358834 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 21:50:28.358948 kubelet[2084]: E0714 21:50:28.358933 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:28.360210 kubelet[2084]: E0714 21:50:28.360193 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:28.582535 kubelet[2084]: I0714 21:50:28.582387 2084 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:50:28.592535 kubelet[2084]: I0714 21:50:28.592493 2084 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 21:50:28.592535 kubelet[2084]: E0714 21:50:28.592532 2084 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 21:50:28.601038 kubelet[2084]: E0714 21:50:28.601011 2084 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:50:28.724774 kubelet[2084]: I0714 21:50:28.724724 2084 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 21:50:28.735266 kubelet[2084]: E0714 21:50:28.735217 2084 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 14 21:50:28.735266 kubelet[2084]: I0714 21:50:28.735253 2084 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:28.737155 kubelet[2084]: E0714 21:50:28.737114 2084 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:28.737155 kubelet[2084]: I0714 21:50:28.737143 2084 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 21:50:28.739571 kubelet[2084]: E0714 21:50:28.738703 2084 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 14 21:50:29.314209 kubelet[2084]: I0714 21:50:29.314117 2084 apiserver.go:52] "Watching apiserver" Jul 14 21:50:29.324818 kubelet[2084]: I0714 21:50:29.324782 2084 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 21:50:29.358762 kubelet[2084]: I0714 21:50:29.358274 2084 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 21:50:29.365646 kubelet[2084]: E0714 21:50:29.365615 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:30.187760 kubelet[2084]: I0714 21:50:30.186905 2084 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:30.196439 kubelet[2084]: E0714 21:50:30.196394 2084 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:30.359980 kubelet[2084]: E0714 21:50:30.359939 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:30.360303 kubelet[2084]: E0714 21:50:30.360029 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:30.485983 systemd[1]: Reloading requested from client PID 2361 ('systemctl') (unit session-7.scope)... Jul 14 21:50:30.485998 systemd[1]: Reloading... Jul 14 21:50:30.552607 zram_generator::config[2400]: No configuration found. Jul 14 21:50:30.638670 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 21:50:30.706249 systemd[1]: Reloading finished in 219 ms. Jul 14 21:50:30.743680 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:50:30.754844 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 21:50:30.755309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:50:30.755709 systemd[1]: kubelet.service: Consumed 1.183s CPU time, 132.3M memory peak, 0B memory swap peak. Jul 14 21:50:30.765791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 21:50:30.864455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 21:50:30.874190 (kubelet)[2442]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 21:50:30.920728 kubelet[2442]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:50:30.920728 kubelet[2442]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 21:50:30.920728 kubelet[2442]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 21:50:30.921055 kubelet[2442]: I0714 21:50:30.920781 2442 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 21:50:30.927165 kubelet[2442]: I0714 21:50:30.926976 2442 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 14 21:50:30.927165 kubelet[2442]: I0714 21:50:30.927002 2442 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 21:50:30.927279 kubelet[2442]: I0714 21:50:30.927241 2442 server.go:954] "Client rotation is on, will bootstrap in background" Jul 14 21:50:30.928434 kubelet[2442]: I0714 21:50:30.928406 2442 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jul 14 21:50:30.930569 kubelet[2442]: I0714 21:50:30.930538 2442 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 21:50:30.933895 kubelet[2442]: E0714 21:50:30.933823 2442 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 21:50:30.933895 kubelet[2442]: I0714 21:50:30.933851 2442 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 21:50:30.937542 kubelet[2442]: I0714 21:50:30.936478 2442 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 21:50:30.937542 kubelet[2442]: I0714 21:50:30.936728 2442 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 21:50:30.937542 kubelet[2442]: I0714 21:50:30.936752 2442 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 21:50:30.937542 kubelet[2442]: I0714 21:50:30.937038 2442 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 21:50:30.937753 kubelet[2442]: I0714 21:50:30.937048 2442 container_manager_linux.go:304] "Creating device plugin manager" Jul 14 21:50:30.937753 kubelet[2442]: I0714 21:50:30.937097 2442 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:50:30.937753 kubelet[2442]: I0714 21:50:30.937218 2442 kubelet.go:446] "Attempting to sync node with API server" Jul 14 21:50:30.937753 kubelet[2442]: I0714 21:50:30.937230 2442 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 21:50:30.937753 kubelet[2442]: I0714 21:50:30.937249 2442 kubelet.go:352] "Adding apiserver pod source" Jul 14 21:50:30.937753 kubelet[2442]: I0714 21:50:30.937264 2442 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Jul 14 21:50:30.940972 kubelet[2442]: I0714 21:50:30.940948 2442 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 21:50:30.942172 kubelet[2442]: I0714 21:50:30.942152 2442 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 21:50:30.943645 kubelet[2442]: I0714 21:50:30.943619 2442 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 21:50:30.943710 kubelet[2442]: I0714 21:50:30.943677 2442 server.go:1287] "Started kubelet" Jul 14 21:50:30.944570 kubelet[2442]: I0714 21:50:30.944259 2442 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 21:50:30.945898 kubelet[2442]: I0714 21:50:30.944411 2442 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 21:50:30.945898 kubelet[2442]: I0714 21:50:30.945799 2442 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 21:50:30.945898 kubelet[2442]: I0714 21:50:30.945866 2442 server.go:479] "Adding debug handlers to kubelet server" Jul 14 21:50:30.949567 kubelet[2442]: I0714 21:50:30.946960 2442 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 21:50:30.949567 kubelet[2442]: I0714 21:50:30.947663 2442 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 21:50:30.949567 kubelet[2442]: E0714 21:50:30.948838 2442 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 21:50:30.949567 kubelet[2442]: I0714 21:50:30.948864 2442 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 21:50:30.949567 kubelet[2442]: I0714 21:50:30.949015 2442 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 21:50:30.949567 kubelet[2442]: I0714 21:50:30.949132 2442 reconciler.go:26] "Reconciler: start to sync state" Jul 14 21:50:30.953310 kubelet[2442]: I0714 21:50:30.953263 2442 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 21:50:30.954486 kubelet[2442]: E0714 21:50:30.954395 2442 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 21:50:30.957313 kubelet[2442]: I0714 21:50:30.954623 2442 factory.go:221] Registration of the containerd container factory successfully Jul 14 21:50:30.957313 kubelet[2442]: I0714 21:50:30.954635 2442 factory.go:221] Registration of the systemd container factory successfully Jul 14 21:50:30.984982 kubelet[2442]: I0714 21:50:30.984875 2442 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 21:50:30.987465 kubelet[2442]: I0714 21:50:30.986663 2442 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 14 21:50:30.987465 kubelet[2442]: I0714 21:50:30.986689 2442 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 14 21:50:30.987465 kubelet[2442]: I0714 21:50:30.986708 2442 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 14 21:50:30.987465 kubelet[2442]: I0714 21:50:30.986714 2442 kubelet.go:2382] "Starting kubelet main sync loop" Jul 14 21:50:30.987465 kubelet[2442]: E0714 21:50:30.986753 2442 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 21:50:31.011135 kubelet[2442]: I0714 21:50:31.011045 2442 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 21:50:31.011135 kubelet[2442]: I0714 21:50:31.011065 2442 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 21:50:31.011135 kubelet[2442]: I0714 21:50:31.011084 2442 state_mem.go:36] "Initialized new in-memory state store" Jul 14 21:50:31.012547 kubelet[2442]: I0714 21:50:31.012376 2442 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 21:50:31.012547 kubelet[2442]: I0714 21:50:31.012401 2442 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 21:50:31.012547 kubelet[2442]: I0714 21:50:31.012429 2442 policy_none.go:49] "None policy: Start" Jul 14 21:50:31.012547 kubelet[2442]: I0714 21:50:31.012439 2442 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 21:50:31.012547 kubelet[2442]: I0714 21:50:31.012449 2442 state_mem.go:35] "Initializing new in-memory state store" Jul 14 21:50:31.012725 kubelet[2442]: I0714 21:50:31.012582 2442 state_mem.go:75] "Updated machine memory state" Jul 14 21:50:31.015967 kubelet[2442]: I0714 21:50:31.015948 2442 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 21:50:31.016115 kubelet[2442]: I0714 21:50:31.016100 2442 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 21:50:31.016149 kubelet[2442]: I0714 21:50:31.016115 2442 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 21:50:31.016304 kubelet[2442]: I0714 21:50:31.016289 2442 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 21:50:31.017237 kubelet[2442]: E0714 21:50:31.017178 2442 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 14 21:50:31.088771 kubelet[2442]: I0714 21:50:31.088261 2442 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 21:50:31.088771 kubelet[2442]: I0714 21:50:31.088404 2442 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 21:50:31.088771 kubelet[2442]: I0714 21:50:31.088649 2442 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:31.093901 kubelet[2442]: E0714 21:50:31.093871 2442 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:31.094211 kubelet[2442]: E0714 21:50:31.094158 2442 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 21:50:31.119300 kubelet[2442]: I0714 21:50:31.119232 2442 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 21:50:31.125773 kubelet[2442]: I0714 21:50:31.125749 2442 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 14 21:50:31.125830 kubelet[2442]: I0714 21:50:31.125817 2442 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 21:50:31.250851 kubelet[2442]: I0714 21:50:31.250793 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:31.250851 kubelet[2442]: I0714 21:50:31.250843 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:31.251051 kubelet[2442]: I0714 21:50:31.250877 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/315974e79ed4bf8459c877fea1cf440c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"315974e79ed4bf8459c877fea1cf440c\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:50:31.251051 kubelet[2442]: I0714 21:50:31.250911 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/315974e79ed4bf8459c877fea1cf440c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"315974e79ed4bf8459c877fea1cf440c\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:50:31.251051 kubelet[2442]: I0714 21:50:31.250941 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:31.251051 kubelet[2442]: I0714 21:50:31.250962 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:31.251051 kubelet[2442]: I0714 21:50:31.250978 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 21:50:31.251210 kubelet[2442]: I0714 21:50:31.250994 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 14 21:50:31.251210 kubelet[2442]: I0714 21:50:31.251012 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/315974e79ed4bf8459c877fea1cf440c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"315974e79ed4bf8459c877fea1cf440c\") " pod="kube-system/kube-apiserver-localhost" Jul 14 21:50:31.396917 kubelet[2442]: E0714 21:50:31.396775 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:31.398257 kubelet[2442]: E0714 21:50:31.398178 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:31.398257 kubelet[2442]: E0714 21:50:31.398212 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:31.938400 kubelet[2442]: I0714 21:50:31.938318 2442 apiserver.go:52] "Watching apiserver" Jul 14 21:50:31.949155 kubelet[2442]: I0714 21:50:31.949112 2442 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 21:50:32.000598 kubelet[2442]: I0714 21:50:32.000380 2442 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 21:50:32.000598 kubelet[2442]: E0714 21:50:32.000489 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:32.000754 kubelet[2442]: E0714 21:50:32.000741 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:32.005406 kubelet[2442]: E0714 21:50:32.005356 2442 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 21:50:32.005517 kubelet[2442]: E0714 21:50:32.005495 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:32.012065 kubelet[2442]: I0714 
21:50:32.012002 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.011983778 podStartE2EDuration="1.011983778s" podCreationTimestamp="2025-07-14 21:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:50:32.002660938 +0000 UTC m=+1.124563761" watchObservedRunningTime="2025-07-14 21:50:32.011983778 +0000 UTC m=+1.133886601" Jul 14 21:50:32.020395 kubelet[2442]: I0714 21:50:32.019697 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.019683778 podStartE2EDuration="3.019683778s" podCreationTimestamp="2025-07-14 21:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:50:32.012386778 +0000 UTC m=+1.134289561" watchObservedRunningTime="2025-07-14 21:50:32.019683778 +0000 UTC m=+1.141586601" Jul 14 21:50:32.031477 kubelet[2442]: I0714 21:50:32.031335 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.031324698 podStartE2EDuration="2.031324698s" podCreationTimestamp="2025-07-14 21:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:50:32.020017498 +0000 UTC m=+1.141920281" watchObservedRunningTime="2025-07-14 21:50:32.031324698 +0000 UTC m=+1.153227521" Jul 14 21:50:33.002310 kubelet[2442]: E0714 21:50:33.002274 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:33.002724 kubelet[2442]: E0714 21:50:33.002393 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:33.809903 kubelet[2442]: E0714 21:50:33.809869 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:34.003460 kubelet[2442]: E0714 21:50:34.003429 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:34.787658 kubelet[2442]: E0714 21:50:34.787617 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:36.144815 kubelet[2442]: I0714 21:50:36.144649 2442 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 21:50:36.145656 containerd[1433]: time="2025-07-14T21:50:36.145446185Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 21:50:36.145969 kubelet[2442]: I0714 21:50:36.145629 2442 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 21:50:36.917937 systemd[1]: Created slice kubepods-besteffort-podce06a26c_6335_4735_bb4e_bb514eb934d5.slice - libcontainer container kubepods-besteffort-podce06a26c_6335_4735_bb4e_bb514eb934d5.slice. 
Jul 14 21:50:36.992141 kubelet[2442]: I0714 21:50:36.992103 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce06a26c-6335-4735-bb4e-bb514eb934d5-xtables-lock\") pod \"kube-proxy-kxhz9\" (UID: \"ce06a26c-6335-4735-bb4e-bb514eb934d5\") " pod="kube-system/kube-proxy-kxhz9" Jul 14 21:50:36.992141 kubelet[2442]: I0714 21:50:36.992144 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9xnd\" (UniqueName: \"kubernetes.io/projected/ce06a26c-6335-4735-bb4e-bb514eb934d5-kube-api-access-p9xnd\") pod \"kube-proxy-kxhz9\" (UID: \"ce06a26c-6335-4735-bb4e-bb514eb934d5\") " pod="kube-system/kube-proxy-kxhz9" Jul 14 21:50:36.992531 kubelet[2442]: I0714 21:50:36.992165 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ce06a26c-6335-4735-bb4e-bb514eb934d5-kube-proxy\") pod \"kube-proxy-kxhz9\" (UID: \"ce06a26c-6335-4735-bb4e-bb514eb934d5\") " pod="kube-system/kube-proxy-kxhz9" Jul 14 21:50:36.992531 kubelet[2442]: I0714 21:50:36.992180 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce06a26c-6335-4735-bb4e-bb514eb934d5-lib-modules\") pod \"kube-proxy-kxhz9\" (UID: \"ce06a26c-6335-4735-bb4e-bb514eb934d5\") " pod="kube-system/kube-proxy-kxhz9" Jul 14 21:50:37.204522 systemd[1]: Created slice kubepods-besteffort-podf43cc9e3_b242_420d_9bf7_391730f9adf0.slice - libcontainer container kubepods-besteffort-podf43cc9e3_b242_420d_9bf7_391730f9adf0.slice. Jul 14 21:50:37.230483 kubelet[2442]: E0714 21:50:37.230440 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:37.231137 containerd[1433]: time="2025-07-14T21:50:37.231103890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kxhz9,Uid:ce06a26c-6335-4735-bb4e-bb514eb934d5,Namespace:kube-system,Attempt:0,}" Jul 14 21:50:37.263718 containerd[1433]: time="2025-07-14T21:50:37.263631146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:50:37.263718 containerd[1433]: time="2025-07-14T21:50:37.263674946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:50:37.263718 containerd[1433]: time="2025-07-14T21:50:37.263690426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:50:37.263870 containerd[1433]: time="2025-07-14T21:50:37.263754226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:50:37.283729 systemd[1]: Started cri-containerd-28a121292b9db43a4868bf64bd5cd8500b50c2422e09db1b625fd74e5d6d24ad.scope - libcontainer container 28a121292b9db43a4868bf64bd5cd8500b50c2422e09db1b625fd74e5d6d24ad. 
Jul 14 21:50:37.293330 kubelet[2442]: I0714 21:50:37.293291 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f43cc9e3-b242-420d-9bf7-391730f9adf0-var-lib-calico\") pod \"tigera-operator-747864d56d-8k2xc\" (UID: \"f43cc9e3-b242-420d-9bf7-391730f9adf0\") " pod="tigera-operator/tigera-operator-747864d56d-8k2xc" Jul 14 21:50:37.293423 kubelet[2442]: I0714 21:50:37.293333 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cffrz\" (UniqueName: \"kubernetes.io/projected/f43cc9e3-b242-420d-9bf7-391730f9adf0-kube-api-access-cffrz\") pod \"tigera-operator-747864d56d-8k2xc\" (UID: \"f43cc9e3-b242-420d-9bf7-391730f9adf0\") " pod="tigera-operator/tigera-operator-747864d56d-8k2xc" Jul 14 21:50:37.301106 containerd[1433]: time="2025-07-14T21:50:37.301065622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kxhz9,Uid:ce06a26c-6335-4735-bb4e-bb514eb934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"28a121292b9db43a4868bf64bd5cd8500b50c2422e09db1b625fd74e5d6d24ad\"" Jul 14 21:50:37.301852 kubelet[2442]: E0714 21:50:37.301831 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:37.303833 containerd[1433]: time="2025-07-14T21:50:37.303799234Z" level=info msg="CreateContainer within sandbox \"28a121292b9db43a4868bf64bd5cd8500b50c2422e09db1b625fd74e5d6d24ad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 21:50:37.317726 containerd[1433]: time="2025-07-14T21:50:37.317682732Z" level=info msg="CreateContainer within sandbox \"28a121292b9db43a4868bf64bd5cd8500b50c2422e09db1b625fd74e5d6d24ad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"39698061410aa2f37b144363b3758e7f3d60329044d10440960e1cdc6120cafb\"" Jul 14 21:50:37.319463 containerd[1433]: time="2025-07-14T21:50:37.318196934Z" level=info msg="StartContainer for \"39698061410aa2f37b144363b3758e7f3d60329044d10440960e1cdc6120cafb\"" Jul 14 21:50:37.342728 systemd[1]: Started cri-containerd-39698061410aa2f37b144363b3758e7f3d60329044d10440960e1cdc6120cafb.scope - libcontainer container 39698061410aa2f37b144363b3758e7f3d60329044d10440960e1cdc6120cafb. Jul 14 21:50:37.367084 containerd[1433]: time="2025-07-14T21:50:37.367034899Z" level=info msg="StartContainer for \"39698061410aa2f37b144363b3758e7f3d60329044d10440960e1cdc6120cafb\" returns successfully" Jul 14 21:50:37.508978 containerd[1433]: time="2025-07-14T21:50:37.508936253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8k2xc,Uid:f43cc9e3-b242-420d-9bf7-391730f9adf0,Namespace:tigera-operator,Attempt:0,}" Jul 14 21:50:37.534890 containerd[1433]: time="2025-07-14T21:50:37.534804361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:50:37.534890 containerd[1433]: time="2025-07-14T21:50:37.534855921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:50:37.535760 containerd[1433]: time="2025-07-14T21:50:37.535198803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:50:37.535760 containerd[1433]: time="2025-07-14T21:50:37.535537004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:50:37.550706 systemd[1]: Started cri-containerd-40e03450dfff67d8d90a6455322a9043c49991a851a406cc986bbd248237cb4b.scope - libcontainer container 40e03450dfff67d8d90a6455322a9043c49991a851a406cc986bbd248237cb4b. Jul 14 21:50:37.586912 containerd[1433]: time="2025-07-14T21:50:37.586874739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8k2xc,Uid:f43cc9e3-b242-420d-9bf7-391730f9adf0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"40e03450dfff67d8d90a6455322a9043c49991a851a406cc986bbd248237cb4b\"" Jul 14 21:50:37.589672 containerd[1433]: time="2025-07-14T21:50:37.588750387Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 14 21:50:38.014201 kubelet[2442]: E0714 21:50:38.014169 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:38.024018 kubelet[2442]: I0714 21:50:38.023957 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kxhz9" podStartSLOduration=2.023941523 podStartE2EDuration="2.023941523s" podCreationTimestamp="2025-07-14 21:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:50:38.023734523 +0000 UTC m=+7.145637346" watchObservedRunningTime="2025-07-14 21:50:38.023941523 +0000 UTC m=+7.145844346" Jul 14 21:50:38.697815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238258284.mount: Deactivated successfully. 
Jul 14 21:50:39.020512 containerd[1433]: time="2025-07-14T21:50:39.020228149Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:39.023306 containerd[1433]: time="2025-07-14T21:50:39.021087793Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 14 21:50:39.023306 containerd[1433]: time="2025-07-14T21:50:39.021967796Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:39.024546 containerd[1433]: time="2025-07-14T21:50:39.024504525Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:39.025434 containerd[1433]: time="2025-07-14T21:50:39.025393688Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.435499136s" Jul 14 21:50:39.025530 containerd[1433]: time="2025-07-14T21:50:39.025514969Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 14 21:50:39.027336 containerd[1433]: time="2025-07-14T21:50:39.027307815Z" level=info msg="CreateContainer within sandbox \"40e03450dfff67d8d90a6455322a9043c49991a851a406cc986bbd248237cb4b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 14 21:50:39.039565 containerd[1433]: time="2025-07-14T21:50:39.039508860Z" level=info msg="CreateContainer within sandbox \"40e03450dfff67d8d90a6455322a9043c49991a851a406cc986bbd248237cb4b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5e9eb3886976cde79156bd8e4ef4481c47e642827d01ea95b8c3a8640b9eaa4c\"" Jul 14 21:50:39.040834 containerd[1433]: time="2025-07-14T21:50:39.040026942Z" level=info msg="StartContainer for \"5e9eb3886976cde79156bd8e4ef4481c47e642827d01ea95b8c3a8640b9eaa4c\"" Jul 14 21:50:39.066766 systemd[1]: Started cri-containerd-5e9eb3886976cde79156bd8e4ef4481c47e642827d01ea95b8c3a8640b9eaa4c.scope - libcontainer container 5e9eb3886976cde79156bd8e4ef4481c47e642827d01ea95b8c3a8640b9eaa4c. 
Jul 14 21:50:39.094187 containerd[1433]: time="2025-07-14T21:50:39.094136581Z" level=info msg="StartContainer for \"5e9eb3886976cde79156bd8e4ef4481c47e642827d01ea95b8c3a8640b9eaa4c\" returns successfully" Jul 14 21:50:40.026842 kubelet[2442]: I0714 21:50:40.026640 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-8k2xc" podStartSLOduration=1.588363179 podStartE2EDuration="3.026622087s" podCreationTimestamp="2025-07-14 21:50:37 +0000 UTC" firstStartedPulling="2025-07-14 21:50:37.587926503 +0000 UTC m=+6.709829326" lastFinishedPulling="2025-07-14 21:50:39.026185411 +0000 UTC m=+8.148088234" observedRunningTime="2025-07-14 21:50:40.026226526 +0000 UTC m=+9.148129349" watchObservedRunningTime="2025-07-14 21:50:40.026622087 +0000 UTC m=+9.148524910" Jul 14 21:50:41.554727 kubelet[2442]: E0714 21:50:41.552524 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:42.028375 kubelet[2442]: E0714 21:50:42.028328 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:43.822346 kubelet[2442]: E0714 21:50:43.822303 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:44.482034 sudo[1605]: pam_unix(sudo:session): session closed for user root Jul 14 21:50:44.487457 sshd[1602]: pam_unix(sshd:session): session closed for user core Jul 14 21:50:44.490159 systemd[1]: sshd@6-10.0.0.52:22-10.0.0.1:33612.service: Deactivated successfully. Jul 14 21:50:44.493169 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 21:50:44.493373 systemd[1]: session-7.scope: Consumed 6.493s CPU time, 152.6M memory peak, 0B memory swap peak. Jul 14 21:50:44.499004 systemd-logind[1412]: Session 7 logged out. Waiting for processes to exit. Jul 14 21:50:44.501048 systemd-logind[1412]: Removed session 7. Jul 14 21:50:44.798122 kubelet[2442]: E0714 21:50:44.798012 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:47.575161 update_engine[1418]: I20250714 21:50:47.574584 1418 update_attempter.cc:509] Updating boot flags... Jul 14 21:50:47.651589 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2860) Jul 14 21:50:47.713591 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2860) Jul 14 21:50:47.750588 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2860) Jul 14 21:50:48.639523 systemd[1]: Created slice kubepods-besteffort-pod9b3253b4_6c02_4122_a720_b5eee38dfb54.slice - libcontainer container kubepods-besteffort-pod9b3253b4_6c02_4122_a720_b5eee38dfb54.slice. 
Jul 14 21:50:48.688747 kubelet[2442]: I0714 21:50:48.688664 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b3253b4-6c02-4122-a720-b5eee38dfb54-tigera-ca-bundle\") pod \"calico-typha-76c6bff9f5-jlk8p\" (UID: \"9b3253b4-6c02-4122-a720-b5eee38dfb54\") " pod="calico-system/calico-typha-76c6bff9f5-jlk8p"
Jul 14 21:50:48.688747 kubelet[2442]: I0714 21:50:48.688743 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9b3253b4-6c02-4122-a720-b5eee38dfb54-typha-certs\") pod \"calico-typha-76c6bff9f5-jlk8p\" (UID: \"9b3253b4-6c02-4122-a720-b5eee38dfb54\") " pod="calico-system/calico-typha-76c6bff9f5-jlk8p"
Jul 14 21:50:48.689194 kubelet[2442]: I0714 21:50:48.688767 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kzwf\" (UniqueName: \"kubernetes.io/projected/9b3253b4-6c02-4122-a720-b5eee38dfb54-kube-api-access-8kzwf\") pod \"calico-typha-76c6bff9f5-jlk8p\" (UID: \"9b3253b4-6c02-4122-a720-b5eee38dfb54\") " pod="calico-system/calico-typha-76c6bff9f5-jlk8p"
Jul 14 21:50:48.804803 systemd[1]: Created slice kubepods-besteffort-pod65456eb6_69cd_4a23_9f13_52346fd51b1a.slice - libcontainer container kubepods-besteffort-pod65456eb6_69cd_4a23_9f13_52346fd51b1a.slice.
Jul 14 21:50:48.890252 kubelet[2442]: I0714 21:50:48.890141 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/65456eb6-69cd-4a23-9f13-52346fd51b1a-cni-log-dir\") pod \"calico-node-9jxcm\" (UID: \"65456eb6-69cd-4a23-9f13-52346fd51b1a\") " pod="calico-system/calico-node-9jxcm"
Jul 14 21:50:48.890680 kubelet[2442]: I0714 21:50:48.890394 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/65456eb6-69cd-4a23-9f13-52346fd51b1a-node-certs\") pod \"calico-node-9jxcm\" (UID: \"65456eb6-69cd-4a23-9f13-52346fd51b1a\") " pod="calico-system/calico-node-9jxcm"
Jul 14 21:50:48.890680 kubelet[2442]: I0714 21:50:48.890441 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65456eb6-69cd-4a23-9f13-52346fd51b1a-tigera-ca-bundle\") pod \"calico-node-9jxcm\" (UID: \"65456eb6-69cd-4a23-9f13-52346fd51b1a\") " pod="calico-system/calico-node-9jxcm"
Jul 14 21:50:48.890680 kubelet[2442]: I0714 21:50:48.890461 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/65456eb6-69cd-4a23-9f13-52346fd51b1a-lib-modules\") pod \"calico-node-9jxcm\" (UID: \"65456eb6-69cd-4a23-9f13-52346fd51b1a\") " pod="calico-system/calico-node-9jxcm"
Jul 14 21:50:48.890680 kubelet[2442]: I0714 21:50:48.890477 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cztr\" (UniqueName: \"kubernetes.io/projected/65456eb6-69cd-4a23-9f13-52346fd51b1a-kube-api-access-7cztr\") pod \"calico-node-9jxcm\" (UID: \"65456eb6-69cd-4a23-9f13-52346fd51b1a\") " pod="calico-system/calico-node-9jxcm"
Jul 14 21:50:48.890680 kubelet[2442]: I0714 21:50:48.890499 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/65456eb6-69cd-4a23-9f13-52346fd51b1a-xtables-lock\") pod \"calico-node-9jxcm\" (UID: \"65456eb6-69cd-4a23-9f13-52346fd51b1a\") " pod="calico-system/calico-node-9jxcm"
Jul 14 21:50:48.890836 kubelet[2442]: I0714 21:50:48.890515 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/65456eb6-69cd-4a23-9f13-52346fd51b1a-policysync\") pod \"calico-node-9jxcm\" (UID: \"65456eb6-69cd-4a23-9f13-52346fd51b1a\") " pod="calico-system/calico-node-9jxcm"
Jul 14 21:50:48.890836 kubelet[2442]: I0714 21:50:48.890532 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/65456eb6-69cd-4a23-9f13-52346fd51b1a-flexvol-driver-host\") pod \"calico-node-9jxcm\" (UID: \"65456eb6-69cd-4a23-9f13-52346fd51b1a\") " pod="calico-system/calico-node-9jxcm"
Jul 14 21:50:48.890836 kubelet[2442]: I0714 21:50:48.890591 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/65456eb6-69cd-4a23-9f13-52346fd51b1a-var-lib-calico\") pod \"calico-node-9jxcm\" (UID: \"65456eb6-69cd-4a23-9f13-52346fd51b1a\") " pod="calico-system/calico-node-9jxcm"
Jul 14 21:50:48.890836 kubelet[2442]: I0714 21:50:48.890625 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/65456eb6-69cd-4a23-9f13-52346fd51b1a-cni-bin-dir\") pod \"calico-node-9jxcm\" (UID: \"65456eb6-69cd-4a23-9f13-52346fd51b1a\") " pod="calico-system/calico-node-9jxcm"
Jul 14 21:50:48.890836 kubelet[2442]: I0714 21:50:48.890642 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/65456eb6-69cd-4a23-9f13-52346fd51b1a-cni-net-dir\") pod \"calico-node-9jxcm\" (UID: \"65456eb6-69cd-4a23-9f13-52346fd51b1a\") " pod="calico-system/calico-node-9jxcm"
Jul 14 21:50:48.890940 kubelet[2442]: I0714 21:50:48.890666 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/65456eb6-69cd-4a23-9f13-52346fd51b1a-var-run-calico\") pod \"calico-node-9jxcm\" (UID: \"65456eb6-69cd-4a23-9f13-52346fd51b1a\") " pod="calico-system/calico-node-9jxcm"
Jul 14 21:50:48.945543 kubelet[2442]: E0714 21:50:48.945276 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:50:48.945797 containerd[1433]: time="2025-07-14T21:50:48.945759884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76c6bff9f5-jlk8p,Uid:9b3253b4-6c02-4122-a720-b5eee38dfb54,Namespace:calico-system,Attempt:0,}"
Jul 14 21:50:48.971040 containerd[1433]: time="2025-07-14T21:50:48.970936056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:50:48.971040 containerd[1433]: time="2025-07-14T21:50:48.970992656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:50:48.971704 containerd[1433]: time="2025-07-14T21:50:48.971025256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:50:48.971704 containerd[1433]: time="2025-07-14T21:50:48.971148297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:50:48.995072 kubelet[2442]: E0714 21:50:48.994999 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.995072 kubelet[2442]: W0714 21:50:48.995024 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:48.995072 kubelet[2442]: E0714 21:50:48.995055 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:48.995311 kubelet[2442]: E0714 21:50:48.995234 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.995311 kubelet[2442]: W0714 21:50:48.995242 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:48.995311 kubelet[2442]: E0714 21:50:48.995271 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:48.995409 kubelet[2442]: E0714 21:50:48.995403 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.995448 kubelet[2442]: W0714 21:50:48.995410 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:48.995659 kubelet[2442]: E0714 21:50:48.995598 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.995659 kubelet[2442]: W0714 21:50:48.995606 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:48.995815 kubelet[2442]: E0714 21:50:48.995744 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.995815 kubelet[2442]: W0714 21:50:48.995750 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:48.995917 kubelet[2442]: E0714 21:50:48.995871 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.995917 kubelet[2442]: W0714 21:50:48.995877 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:48.996038 kubelet[2442]: E0714 21:50:48.995997 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.996038 kubelet[2442]: W0714 21:50:48.996007 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:48.996093 kubelet[2442]: E0714 21:50:48.996034 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:48.996093 kubelet[2442]: E0714 21:50:48.996059 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:48.996093 kubelet[2442]: E0714 21:50:48.996068 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:48.996093 kubelet[2442]: E0714 21:50:48.996076 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:48.996334 kubelet[2442]: E0714 21:50:48.996125 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:48.996334 kubelet[2442]: E0714 21:50:48.996141 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.996334 kubelet[2442]: W0714 21:50:48.996149 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:48.996334 kubelet[2442]: E0714 21:50:48.996174 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:48.996609 kubelet[2442]: E0714 21:50:48.996597 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.996609 kubelet[2442]: W0714 21:50:48.996608 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:48.996776 kubelet[2442]: E0714 21:50:48.996639 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:48.996864 kubelet[2442]: E0714 21:50:48.996788 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.996864 kubelet[2442]: W0714 21:50:48.996797 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:48.996864 kubelet[2442]: E0714 21:50:48.996824 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:48.997820 systemd[1]: Started cri-containerd-d93757ba9c376c5bdcae0685713181ee312e647e186a1fbd5c70c9866490c119.scope - libcontainer container d93757ba9c376c5bdcae0685713181ee312e647e186a1fbd5c70c9866490c119.
Jul 14 21:50:48.998810 kubelet[2442]: E0714 21:50:48.998787 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.998810 kubelet[2442]: W0714 21:50:48.998807 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:48.998938 kubelet[2442]: E0714 21:50:48.998827 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:48.999352 kubelet[2442]: E0714 21:50:48.999317 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.999352 kubelet[2442]: W0714 21:50:48.999333 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:48.999648 kubelet[2442]: E0714 21:50:48.999524 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.999648 kubelet[2442]: W0714 21:50:48.999537 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:48.999648 kubelet[2442]: E0714 21:50:48.999599 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:48.999648 kubelet[2442]: E0714 21:50:48.999646 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:48.999771 kubelet[2442]: E0714 21:50:48.999691 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:48.999771 kubelet[2442]: W0714 21:50:48.999699 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.000026 kubelet[2442]: E0714 21:50:48.999854 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.000076 kubelet[2442]: E0714 21:50:49.000041 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.000076 kubelet[2442]: W0714 21:50:49.000051 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.000168 kubelet[2442]: E0714 21:50:49.000153 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.001026 kubelet[2442]: E0714 21:50:49.001004 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.001026 kubelet[2442]: W0714 21:50:49.001022 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.001181 kubelet[2442]: E0714 21:50:49.001086 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.002325 kubelet[2442]: E0714 21:50:49.002308 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.002325 kubelet[2442]: W0714 21:50:49.002324 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.002423 kubelet[2442]: E0714 21:50:49.002354 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.002606 kubelet[2442]: E0714 21:50:49.002591 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.002606 kubelet[2442]: W0714 21:50:49.002605 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.002739 kubelet[2442]: E0714 21:50:49.002687 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.002819 kubelet[2442]: E0714 21:50:49.002806 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.002819 kubelet[2442]: W0714 21:50:49.002819 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.002954 kubelet[2442]: E0714 21:50:49.002874 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.002993 kubelet[2442]: E0714 21:50:49.002978 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.002993 kubelet[2442]: W0714 21:50:49.002989 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.003154 kubelet[2442]: E0714 21:50:49.003036 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.003154 kubelet[2442]: E0714 21:50:49.003145 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.003154 kubelet[2442]: W0714 21:50:49.003153 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.003282 kubelet[2442]: E0714 21:50:49.003243 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.003320 kubelet[2442]: E0714 21:50:49.003302 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.003320 kubelet[2442]: W0714 21:50:49.003309 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.003510 kubelet[2442]: E0714 21:50:49.003446 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.003510 kubelet[2442]: W0714 21:50:49.003456 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.003510 kubelet[2442]: E0714 21:50:49.003446 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.003510 kubelet[2442]: E0714 21:50:49.003484 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.003714 kubelet[2442]: E0714 21:50:49.003601 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.003714 kubelet[2442]: W0714 21:50:49.003608 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.003714 kubelet[2442]: E0714 21:50:49.003638 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.003785 kubelet[2442]: E0714 21:50:49.003779 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.003807 kubelet[2442]: W0714 21:50:49.003788 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.003827 kubelet[2442]: E0714 21:50:49.003806 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.004055 kubelet[2442]: E0714 21:50:49.004038 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.004105 kubelet[2442]: W0714 21:50:49.004055 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.004105 kubelet[2442]: E0714 21:50:49.004077 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.004304 kubelet[2442]: E0714 21:50:49.004292 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.004304 kubelet[2442]: W0714 21:50:49.004304 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.004360 kubelet[2442]: E0714 21:50:49.004318 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.004774 kubelet[2442]: E0714 21:50:49.004657 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.004774 kubelet[2442]: W0714 21:50:49.004673 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.004774 kubelet[2442]: E0714 21:50:49.004709 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.005178 kubelet[2442]: E0714 21:50:49.005131 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.005178 kubelet[2442]: W0714 21:50:49.005145 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.006001 kubelet[2442]: E0714 21:50:49.005159 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.006298 kubelet[2442]: E0714 21:50:49.006281 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.006400 kubelet[2442]: W0714 21:50:49.006386 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.006493 kubelet[2442]: E0714 21:50:49.006479 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.014773 kubelet[2442]: E0714 21:50:49.014751 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.014873 kubelet[2442]: W0714 21:50:49.014859 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.014968 kubelet[2442]: E0714 21:50:49.014955 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.041263 kubelet[2442]: E0714 21:50:49.041211 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9n94f" podUID="c6dffae0-199e-4860-b66d-240601db16b1"
Jul 14 21:50:49.048026 containerd[1433]: time="2025-07-14T21:50:49.047989809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76c6bff9f5-jlk8p,Uid:9b3253b4-6c02-4122-a720-b5eee38dfb54,Namespace:calico-system,Attempt:0,} returns sandbox id \"d93757ba9c376c5bdcae0685713181ee312e647e186a1fbd5c70c9866490c119\""
Jul 14 21:50:49.048776 kubelet[2442]: E0714 21:50:49.048753 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 21:50:49.050093 containerd[1433]: time="2025-07-14T21:50:49.050033413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 14 21:50:49.080123 kubelet[2442]: E0714 21:50:49.080090 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.080123 kubelet[2442]: W0714 21:50:49.080112 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.080123 kubelet[2442]: E0714 21:50:49.080132 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.080347 kubelet[2442]: E0714 21:50:49.080334 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.080347 kubelet[2442]: W0714 21:50:49.080347 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.080408 kubelet[2442]: E0714 21:50:49.080356 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.080533 kubelet[2442]: E0714 21:50:49.080522 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.080591 kubelet[2442]: W0714 21:50:49.080533 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.080591 kubelet[2442]: E0714 21:50:49.080542 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.080718 kubelet[2442]: E0714 21:50:49.080706 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.080718 kubelet[2442]: W0714 21:50:49.080717 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.080773 kubelet[2442]: E0714 21:50:49.080725 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.080889 kubelet[2442]: E0714 21:50:49.080871 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.080889 kubelet[2442]: W0714 21:50:49.080878 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.080889 kubelet[2442]: E0714 21:50:49.080886 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.081019 kubelet[2442]: E0714 21:50:49.081008 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.081019 kubelet[2442]: W0714 21:50:49.081018 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.081076 kubelet[2442]: E0714 21:50:49.081025 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.081157 kubelet[2442]: E0714 21:50:49.081146 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.081157 kubelet[2442]: W0714 21:50:49.081156 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.081218 kubelet[2442]: E0714 21:50:49.081163 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.081305 kubelet[2442]: E0714 21:50:49.081295 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.081305 kubelet[2442]: W0714 21:50:49.081304 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.081359 kubelet[2442]: E0714 21:50:49.081312 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.081483 kubelet[2442]: E0714 21:50:49.081472 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.081483 kubelet[2442]: W0714 21:50:49.081482 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.081544 kubelet[2442]: E0714 21:50:49.081489 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.081638 kubelet[2442]: E0714 21:50:49.081627 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.081680 kubelet[2442]: W0714 21:50:49.081640 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.081680 kubelet[2442]: E0714 21:50:49.081648 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.081789 kubelet[2442]: E0714 21:50:49.081778 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.081789 kubelet[2442]: W0714 21:50:49.081788 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.081849 kubelet[2442]: E0714 21:50:49.081795 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.081934 kubelet[2442]: E0714 21:50:49.081923 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.081934 kubelet[2442]: W0714 21:50:49.081934 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.081997 kubelet[2442]: E0714 21:50:49.081941 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.082083 kubelet[2442]: E0714 21:50:49.082073 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.082083 kubelet[2442]: W0714 21:50:49.082082 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.082141 kubelet[2442]: E0714 21:50:49.082089 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.082225 kubelet[2442]: E0714 21:50:49.082216 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.082255 kubelet[2442]: W0714 21:50:49.082227 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.082255 kubelet[2442]: E0714 21:50:49.082234 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.082367 kubelet[2442]: E0714 21:50:49.082357 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.082367 kubelet[2442]: W0714 21:50:49.082366 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.082433 kubelet[2442]: E0714 21:50:49.082374 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.082518 kubelet[2442]: E0714 21:50:49.082508 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.082518 kubelet[2442]: W0714 21:50:49.082518 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.082588 kubelet[2442]: E0714 21:50:49.082525 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.082689 kubelet[2442]: E0714 21:50:49.082678 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.082689 kubelet[2442]: W0714 21:50:49.082689 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.082755 kubelet[2442]: E0714 21:50:49.082696 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.082834 kubelet[2442]: E0714 21:50:49.082824 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.082834 kubelet[2442]: W0714 21:50:49.082833 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.082888 kubelet[2442]: E0714 21:50:49.082840 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.082992 kubelet[2442]: E0714 21:50:49.082981 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.082992 kubelet[2442]: W0714 21:50:49.082992 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.083050 kubelet[2442]: E0714 21:50:49.083000 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.083152 kubelet[2442]: E0714 21:50:49.083142 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.083152 kubelet[2442]: W0714 21:50:49.083151 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.083200 kubelet[2442]: E0714 21:50:49.083159 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.091611 kubelet[2442]: E0714 21:50:49.091583 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.091611 kubelet[2442]: W0714 21:50:49.091601 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.091611 kubelet[2442]: E0714 21:50:49.091614 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.091723 kubelet[2442]: I0714 21:50:49.091642 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c6dffae0-199e-4860-b66d-240601db16b1-varrun\") pod \"csi-node-driver-9n94f\" (UID: \"c6dffae0-199e-4860-b66d-240601db16b1\") " pod="calico-system/csi-node-driver-9n94f"
Jul 14 21:50:49.091856 kubelet[2442]: E0714 21:50:49.091833 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.091856 kubelet[2442]: W0714 21:50:49.091845 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.091920 kubelet[2442]: E0714 21:50:49.091860 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.091920 kubelet[2442]: I0714 21:50:49.091875 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c6dffae0-199e-4860-b66d-240601db16b1-registration-dir\") pod \"csi-node-driver-9n94f\" (UID: \"c6dffae0-199e-4860-b66d-240601db16b1\") " pod="calico-system/csi-node-driver-9n94f"
Jul 14 21:50:49.092079 kubelet[2442]: E0714 21:50:49.092056 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.092079 kubelet[2442]: W0714 21:50:49.092076 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.092127 kubelet[2442]: E0714 21:50:49.092092 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.092256 kubelet[2442]: E0714 21:50:49.092245 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.092283 kubelet[2442]: W0714 21:50:49.092256 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.092283 kubelet[2442]: E0714 21:50:49.092269 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.092441 kubelet[2442]: E0714 21:50:49.092431 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.092467 kubelet[2442]: W0714 21:50:49.092442 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.092467 kubelet[2442]: E0714 21:50:49.092454 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.092509 kubelet[2442]: I0714 21:50:49.092473 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c6dffae0-199e-4860-b66d-240601db16b1-kubelet-dir\") pod \"csi-node-driver-9n94f\" (UID: \"c6dffae0-199e-4860-b66d-240601db16b1\") " pod="calico-system/csi-node-driver-9n94f"
Jul 14 21:50:49.092655 kubelet[2442]: E0714 21:50:49.092642 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.092683 kubelet[2442]: W0714 21:50:49.092655 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.092683 kubelet[2442]: E0714 21:50:49.092668 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.092722 kubelet[2442]: I0714 21:50:49.092682 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c6dffae0-199e-4860-b66d-240601db16b1-socket-dir\") pod \"csi-node-driver-9n94f\" (UID: \"c6dffae0-199e-4860-b66d-240601db16b1\") " pod="calico-system/csi-node-driver-9n94f"
Jul 14 21:50:49.092857 kubelet[2442]: E0714 21:50:49.092844 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.092881 kubelet[2442]: W0714 21:50:49.092856 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.092881 kubelet[2442]: E0714 21:50:49.092870 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.092966 kubelet[2442]: I0714 21:50:49.092884 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh862\" (UniqueName: \"kubernetes.io/projected/c6dffae0-199e-4860-b66d-240601db16b1-kube-api-access-hh862\") pod \"csi-node-driver-9n94f\" (UID: \"c6dffae0-199e-4860-b66d-240601db16b1\") " pod="calico-system/csi-node-driver-9n94f"
Jul 14 21:50:49.093062 kubelet[2442]: E0714 21:50:49.093051 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.093084 kubelet[2442]: W0714 21:50:49.093061 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.093108 kubelet[2442]: E0714 21:50:49.093086 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.093263 kubelet[2442]: E0714 21:50:49.093254 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.093290 kubelet[2442]: W0714 21:50:49.093263 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.093290 kubelet[2442]: E0714 21:50:49.093281 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.093408 kubelet[2442]: E0714 21:50:49.093398 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.093439 kubelet[2442]: W0714 21:50:49.093408 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.093439 kubelet[2442]: E0714 21:50:49.093426 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.093694 kubelet[2442]: E0714 21:50:49.093681 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.093694 kubelet[2442]: W0714 21:50:49.093692 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.093754 kubelet[2442]: E0714 21:50:49.093704 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.093859 kubelet[2442]: E0714 21:50:49.093850 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.093882 kubelet[2442]: W0714 21:50:49.093859 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.093882 kubelet[2442]: E0714 21:50:49.093867 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.094057 kubelet[2442]: E0714 21:50:49.094044 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.094057 kubelet[2442]: W0714 21:50:49.094053 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.094117 kubelet[2442]: E0714 21:50:49.094061 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.094231 kubelet[2442]: E0714 21:50:49.094219 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.094231 kubelet[2442]: W0714 21:50:49.094229 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.094279 kubelet[2442]: E0714 21:50:49.094236 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.094391 kubelet[2442]: E0714 21:50:49.094381 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.094425 kubelet[2442]: W0714 21:50:49.094391 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.094425 kubelet[2442]: E0714 21:50:49.094406 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.113327 containerd[1433]: time="2025-07-14T21:50:49.113286015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9jxcm,Uid:65456eb6-69cd-4a23-9f13-52346fd51b1a,Namespace:calico-system,Attempt:0,}"
Jul 14 21:50:49.138211 containerd[1433]: time="2025-07-14T21:50:49.138099983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 21:50:49.138211 containerd[1433]: time="2025-07-14T21:50:49.138147983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 21:50:49.138211 containerd[1433]: time="2025-07-14T21:50:49.138159023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:50:49.138440 containerd[1433]: time="2025-07-14T21:50:49.138240983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 21:50:49.161807 systemd[1]: Started cri-containerd-7f1279c9aa746f3bfb5bab09889b28742d5a4a4882e601d7febfacc215b42fac.scope - libcontainer container 7f1279c9aa746f3bfb5bab09889b28742d5a4a4882e601d7febfacc215b42fac.
Jul 14 21:50:49.191592 containerd[1433]: time="2025-07-14T21:50:49.191512966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9jxcm,Uid:65456eb6-69cd-4a23-9f13-52346fd51b1a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7f1279c9aa746f3bfb5bab09889b28742d5a4a4882e601d7febfacc215b42fac\""
Jul 14 21:50:49.194224 kubelet[2442]: E0714 21:50:49.194179 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.194224 kubelet[2442]: W0714 21:50:49.194207 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.194224 kubelet[2442]: E0714 21:50:49.194225 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.194788 kubelet[2442]: E0714 21:50:49.194438 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.194788 kubelet[2442]: W0714 21:50:49.194451 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.194788 kubelet[2442]: E0714 21:50:49.194466 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.194788 kubelet[2442]: E0714 21:50:49.194671 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.194788 kubelet[2442]: W0714 21:50:49.194680 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.194788 kubelet[2442]: E0714 21:50:49.194695 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.195909 kubelet[2442]: E0714 21:50:49.195034 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.195909 kubelet[2442]: W0714 21:50:49.195083 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.195909 kubelet[2442]: E0714 21:50:49.195103 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.195909 kubelet[2442]: E0714 21:50:49.195742 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.195909 kubelet[2442]: W0714 21:50:49.195754 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.195909 kubelet[2442]: E0714 21:50:49.195767 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.196098 kubelet[2442]: E0714 21:50:49.195993 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.196098 kubelet[2442]: W0714 21:50:49.196050 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.196218 kubelet[2442]: E0714 21:50:49.196168 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.196388 kubelet[2442]: E0714 21:50:49.196369 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.196388 kubelet[2442]: W0714 21:50:49.196384 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.196504 kubelet[2442]: E0714 21:50:49.196421 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.196618 kubelet[2442]: E0714 21:50:49.196603 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.196618 kubelet[2442]: W0714 21:50:49.196616 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.196762 kubelet[2442]: E0714 21:50:49.196646 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.197006 kubelet[2442]: E0714 21:50:49.196973 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.197036 kubelet[2442]: W0714 21:50:49.197003 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.197087 kubelet[2442]: E0714 21:50:49.197066 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.197297 kubelet[2442]: E0714 21:50:49.197281 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.197297 kubelet[2442]: W0714 21:50:49.197295 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.197364 kubelet[2442]: E0714 21:50:49.197313 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.197597 kubelet[2442]: E0714 21:50:49.197580 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.197597 kubelet[2442]: W0714 21:50:49.197595 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.197672 kubelet[2442]: E0714 21:50:49.197612 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 14 21:50:49.198246 kubelet[2442]: E0714 21:50:49.198220 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 14 21:50:49.198246 kubelet[2442]: W0714 21:50:49.198236 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 14 21:50:49.198681 kubelet[2442]: E0714 21:50:49.198628 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 14 21:50:49.199190 kubelet[2442]: E0714 21:50:49.199165 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.199190 kubelet[2442]: W0714 21:50:49.199182 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.199190 kubelet[2442]: E0714 21:50:49.199217 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:49.200172 kubelet[2442]: E0714 21:50:49.200146 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.200172 kubelet[2442]: W0714 21:50:49.200161 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.200268 kubelet[2442]: E0714 21:50:49.200249 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:49.201081 kubelet[2442]: E0714 21:50:49.201059 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.201081 kubelet[2442]: W0714 21:50:49.201075 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.201177 kubelet[2442]: E0714 21:50:49.201157 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:49.201629 kubelet[2442]: E0714 21:50:49.201610 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.201629 kubelet[2442]: W0714 21:50:49.201626 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.201723 kubelet[2442]: E0714 21:50:49.201680 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:49.202609 kubelet[2442]: E0714 21:50:49.202332 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.202609 kubelet[2442]: W0714 21:50:49.202517 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.202609 kubelet[2442]: E0714 21:50:49.202574 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:50:49.202913 kubelet[2442]: E0714 21:50:49.202886 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.202913 kubelet[2442]: W0714 21:50:49.202902 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.202913 kubelet[2442]: E0714 21:50:49.202937 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:49.203094 kubelet[2442]: E0714 21:50:49.203070 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.203094 kubelet[2442]: W0714 21:50:49.203086 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.203157 kubelet[2442]: E0714 21:50:49.203130 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:49.203290 kubelet[2442]: E0714 21:50:49.203276 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.203290 kubelet[2442]: W0714 21:50:49.203287 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.203353 kubelet[2442]: E0714 21:50:49.203306 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:49.203504 kubelet[2442]: E0714 21:50:49.203489 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.203504 kubelet[2442]: W0714 21:50:49.203501 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.203590 kubelet[2442]: E0714 21:50:49.203515 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:49.204595 kubelet[2442]: E0714 21:50:49.203715 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.204595 kubelet[2442]: W0714 21:50:49.203727 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.204595 kubelet[2442]: E0714 21:50:49.203742 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:50:49.205158 kubelet[2442]: E0714 21:50:49.205136 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.205158 kubelet[2442]: W0714 21:50:49.205152 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.205257 kubelet[2442]: E0714 21:50:49.205171 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:49.206371 kubelet[2442]: E0714 21:50:49.206352 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.206371 kubelet[2442]: W0714 21:50:49.206367 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.206482 kubelet[2442]: E0714 21:50:49.206381 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:49.206829 kubelet[2442]: E0714 21:50:49.206802 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.206829 kubelet[2442]: W0714 21:50:49.206817 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.206829 kubelet[2442]: E0714 21:50:49.206829 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:49.222856 kubelet[2442]: E0714 21:50:49.222689 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:49.222856 kubelet[2442]: W0714 21:50:49.222712 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:49.222856 kubelet[2442]: E0714 21:50:49.222732 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:50.094844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3577446827.mount: Deactivated successfully. 
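The repeated kubelet errors above all come from one mechanism: the FlexVolume prober walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs each driver binary with the argument `init`, and parses its stdout as a JSON status object. Because the `nodeagent~uds/uds` executable does not exist yet (it is typically installed later by Calico's flexvol-driver init container, which appears further down in this log), the call produces empty output, and unmarshaling an empty byte slice fails with exactly "unexpected end of JSON input". Below is a minimal Go sketch of that call pattern; it is not kubelet's actual code, and the `driverStatus` struct only abbreviates the FlexVolume reply convention (e.g. {"status":"Success","capabilities":{"attach":false}}).

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus abbreviates the JSON reply a FlexVolume driver is
// expected to print on stdout for commands such as "init".
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// callDriver mimics the probe: run `<driver> init` and parse stdout.
func callDriver(path string, args ...string) (*driverStatus, error) {
	out, execErr := exec.Command(path, args...).Output()
	if execErr != nil {
		// A missing binary leaves out empty, matching the logged
		// `driver call failed: ... output: ""` warning.
		fmt.Printf("driver call failed: %v, output: %q\n", execErr, string(out))
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this is precisely "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output for command %q, output %q: %w", args, string(out), err)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err) // reproduces the error class seen in the log
}
```

Once the driver binary is actually installed in that directory, the same probe starts receiving valid JSON and these messages typically stop recurring.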
Jul 14 21:50:50.818964 containerd[1433]: time="2025-07-14T21:50:50.818919368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:50.819437 containerd[1433]: time="2025-07-14T21:50:50.819351489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 14 21:50:50.820271 containerd[1433]: time="2025-07-14T21:50:50.820236850Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:50.822286 containerd[1433]: time="2025-07-14T21:50:50.822256014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:50.822915 containerd[1433]: time="2025-07-14T21:50:50.822885375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.772792122s" Jul 14 21:50:50.822991 containerd[1433]: time="2025-07-14T21:50:50.822918295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 14 21:50:50.824070 containerd[1433]: time="2025-07-14T21:50:50.824045337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 14 21:50:50.832686 containerd[1433]: time="2025-07-14T21:50:50.832645113Z" level=info msg="CreateContainer within sandbox \"d93757ba9c376c5bdcae0685713181ee312e647e186a1fbd5c70c9866490c119\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 14 21:50:50.845384 containerd[1433]: time="2025-07-14T21:50:50.845348456Z" level=info msg="CreateContainer within sandbox \"d93757ba9c376c5bdcae0685713181ee312e647e186a1fbd5c70c9866490c119\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4d5b405a8ee7cce4934b19c7c52e04cb02f0c6f128a8f16d44fb0373c97179dc\"" Jul 14 21:50:50.848972 containerd[1433]: time="2025-07-14T21:50:50.848945462Z" level=info msg="StartContainer for \"4d5b405a8ee7cce4934b19c7c52e04cb02f0c6f128a8f16d44fb0373c97179dc\"" Jul 14 21:50:50.885731 systemd[1]: Started cri-containerd-4d5b405a8ee7cce4934b19c7c52e04cb02f0c6f128a8f16d44fb0373c97179dc.scope - libcontainer container 4d5b405a8ee7cce4934b19c7c52e04cb02f0c6f128a8f16d44fb0373c97179dc. 
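For orientation, the containerd entries here trace the standard CRI call sequence that kubelet drives: RunPodSandbox returns a sandbox ID, CreateContainer places a container config inside that sandbox, and StartContainer launches it, with systemd tracking each container as a transient cri-containerd-<id>.scope unit. The following is a hedged sketch of those three RPCs using the published k8s.io/cri-api types; the socket path is an assumption for a typical containerd host, and the metadata and image values are modeled on this log rather than taken from any real deployment flow.

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI endpoint; path assumed from common containerd setups.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1) RunPodSandbox, mirroring the calico-node-9jxcm entry above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "calico-node-9jxcm",
			Uid:       "65456eb6-69cd-4a23-9f13-52346fd51b1a",
			Namespace: "calico-system",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2) CreateContainer inside the returned sandbox.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha"},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.2"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3) StartContainer, matching the "StartContainer ... returns successfully" lines.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("started container %s in sandbox %s", ctr.ContainerId, sb.PodSandboxId)
}
```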
Jul 14 21:50:50.928768 containerd[1433]: time="2025-07-14T21:50:50.928674567Z" level=info msg="StartContainer for \"4d5b405a8ee7cce4934b19c7c52e04cb02f0c6f128a8f16d44fb0373c97179dc\" returns successfully" Jul 14 21:50:50.988911 kubelet[2442]: E0714 21:50:50.988750 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9n94f" podUID="c6dffae0-199e-4860-b66d-240601db16b1" Jul 14 21:50:51.055607 kubelet[2442]: E0714 21:50:51.055582 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:51.097345 kubelet[2442]: E0714 21:50:51.097161 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.097345 kubelet[2442]: W0714 21:50:51.097184 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.097345 kubelet[2442]: E0714 21:50:51.097205 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.097711 kubelet[2442]: E0714 21:50:51.097426 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.097711 kubelet[2442]: W0714 21:50:51.097436 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.097711 kubelet[2442]: E0714 21:50:51.097476 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.098069 kubelet[2442]: E0714 21:50:51.098056 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.098137 kubelet[2442]: W0714 21:50:51.098125 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.098198 kubelet[2442]: E0714 21:50:51.098187 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.098528 kubelet[2442]: E0714 21:50:51.098457 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.098528 kubelet[2442]: W0714 21:50:51.098468 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.098528 kubelet[2442]: E0714 21:50:51.098479 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:50:51.098868 kubelet[2442]: E0714 21:50:51.098812 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.098868 kubelet[2442]: W0714 21:50:51.098822 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.098868 kubelet[2442]: E0714 21:50:51.098834 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.099161 kubelet[2442]: E0714 21:50:51.099105 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.099161 kubelet[2442]: W0714 21:50:51.099115 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.099161 kubelet[2442]: E0714 21:50:51.099125 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.099647 kubelet[2442]: E0714 21:50:51.099538 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.099647 kubelet[2442]: W0714 21:50:51.099549 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.099647 kubelet[2442]: E0714 21:50:51.099569 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.099822 kubelet[2442]: E0714 21:50:51.099810 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.099879 kubelet[2442]: W0714 21:50:51.099868 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.099931 kubelet[2442]: E0714 21:50:51.099921 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.100190 kubelet[2442]: E0714 21:50:51.100177 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.100358 kubelet[2442]: W0714 21:50:51.100255 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.100358 kubelet[2442]: E0714 21:50:51.100271 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:50:51.100491 kubelet[2442]: E0714 21:50:51.100479 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.100573 kubelet[2442]: W0714 21:50:51.100544 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.100713 kubelet[2442]: E0714 21:50:51.100628 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.100857 kubelet[2442]: E0714 21:50:51.100844 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.100918 kubelet[2442]: W0714 21:50:51.100908 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.101045 kubelet[2442]: E0714 21:50:51.100969 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.101205 kubelet[2442]: E0714 21:50:51.101193 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.101268 kubelet[2442]: W0714 21:50:51.101258 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.101389 kubelet[2442]: E0714 21:50:51.101311 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.101585 kubelet[2442]: E0714 21:50:51.101547 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.101689 kubelet[2442]: W0714 21:50:51.101656 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.101850 kubelet[2442]: E0714 21:50:51.101800 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.102166 kubelet[2442]: E0714 21:50:51.102070 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.102166 kubelet[2442]: W0714 21:50:51.102083 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.102166 kubelet[2442]: E0714 21:50:51.102093 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:50:51.104746 kubelet[2442]: E0714 21:50:51.104731 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.105014 kubelet[2442]: W0714 21:50:51.104833 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.105014 kubelet[2442]: E0714 21:50:51.104854 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.120843 kubelet[2442]: E0714 21:50:51.120808 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.120843 kubelet[2442]: W0714 21:50:51.120834 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.121030 kubelet[2442]: E0714 21:50:51.120855 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.121099 kubelet[2442]: E0714 21:50:51.121074 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.121099 kubelet[2442]: W0714 21:50:51.121088 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.121099 kubelet[2442]: E0714 21:50:51.121104 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.121392 kubelet[2442]: E0714 21:50:51.121372 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.121392 kubelet[2442]: W0714 21:50:51.121389 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.121477 kubelet[2442]: E0714 21:50:51.121403 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.121706 kubelet[2442]: E0714 21:50:51.121689 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.121706 kubelet[2442]: W0714 21:50:51.121704 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.122631 kubelet[2442]: E0714 21:50:51.122597 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:50:51.122877 kubelet[2442]: E0714 21:50:51.122861 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.122877 kubelet[2442]: W0714 21:50:51.122874 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.123014 kubelet[2442]: E0714 21:50:51.122938 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.123076 kubelet[2442]: E0714 21:50:51.123063 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.123076 kubelet[2442]: W0714 21:50:51.123074 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.124882 kubelet[2442]: E0714 21:50:51.123234 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.124882 kubelet[2442]: W0714 21:50:51.123243 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.124882 kubelet[2442]: E0714 21:50:51.123437 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.124882 kubelet[2442]: W0714 21:50:51.123445 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.124882 kubelet[2442]: E0714 21:50:51.123455 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.124882 kubelet[2442]: E0714 21:50:51.124231 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.124882 kubelet[2442]: W0714 21:50:51.124243 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.124882 kubelet[2442]: E0714 21:50:51.124255 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.124882 kubelet[2442]: E0714 21:50:51.124717 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.124882 kubelet[2442]: E0714 21:50:51.124751 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:50:51.125672 kubelet[2442]: E0714 21:50:51.125649 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.125672 kubelet[2442]: W0714 21:50:51.125665 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.125672 kubelet[2442]: E0714 21:50:51.125684 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.127471 kubelet[2442]: E0714 21:50:51.125927 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.127471 kubelet[2442]: W0714 21:50:51.125941 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.127471 kubelet[2442]: E0714 21:50:51.125956 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.127471 kubelet[2442]: E0714 21:50:51.126669 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.127471 kubelet[2442]: W0714 21:50:51.126683 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.127471 kubelet[2442]: E0714 21:50:51.126695 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.127471 kubelet[2442]: E0714 21:50:51.126945 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.127471 kubelet[2442]: W0714 21:50:51.126955 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.127471 kubelet[2442]: E0714 21:50:51.126965 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.129668 kubelet[2442]: E0714 21:50:51.129644 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.129668 kubelet[2442]: W0714 21:50:51.129662 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.129839 kubelet[2442]: E0714 21:50:51.129734 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:50:51.129915 kubelet[2442]: E0714 21:50:51.129900 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.129915 kubelet[2442]: W0714 21:50:51.129912 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.129978 kubelet[2442]: E0714 21:50:51.129923 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.130098 kubelet[2442]: E0714 21:50:51.130085 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.130098 kubelet[2442]: W0714 21:50:51.130098 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.130161 kubelet[2442]: E0714 21:50:51.130108 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.130316 kubelet[2442]: E0714 21:50:51.130302 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.130316 kubelet[2442]: W0714 21:50:51.130313 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.130374 kubelet[2442]: E0714 21:50:51.130322 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 21:50:51.130699 kubelet[2442]: E0714 21:50:51.130675 2442 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 21:50:51.130699 kubelet[2442]: W0714 21:50:51.130691 2442 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 21:50:51.130699 kubelet[2442]: E0714 21:50:51.130701 2442 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 21:50:51.911844 containerd[1433]: time="2025-07-14T21:50:51.911784722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:51.912834 containerd[1433]: time="2025-07-14T21:50:51.912328963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 14 21:50:51.914583 containerd[1433]: time="2025-07-14T21:50:51.914143566Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:51.917114 containerd[1433]: time="2025-07-14T21:50:51.917018851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:51.917879 containerd[1433]: time="2025-07-14T21:50:51.917831053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.093752236s" Jul 14 21:50:51.917879 containerd[1433]: time="2025-07-14T21:50:51.917866893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 14 21:50:51.921157 containerd[1433]: time="2025-07-14T21:50:51.921107818Z" level=info msg="CreateContainer within sandbox \"7f1279c9aa746f3bfb5bab09889b28742d5a4a4882e601d7febfacc215b42fac\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 14 21:50:51.948318 containerd[1433]: time="2025-07-14T21:50:51.948264704Z" level=info msg="CreateContainer within sandbox \"7f1279c9aa746f3bfb5bab09889b28742d5a4a4882e601d7febfacc215b42fac\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5d26632c1aaad8540a86815c103e24bab284e0366beb0ce88357f9e93734297a\"" Jul 14 21:50:51.949167 containerd[1433]: time="2025-07-14T21:50:51.949082186Z" level=info msg="StartContainer for \"5d26632c1aaad8540a86815c103e24bab284e0366beb0ce88357f9e93734297a\"" Jul 14 21:50:51.988728 systemd[1]: Started cri-containerd-5d26632c1aaad8540a86815c103e24bab284e0366beb0ce88357f9e93734297a.scope - libcontainer container 5d26632c1aaad8540a86815c103e24bab284e0366beb0ce88357f9e93734297a. Jul 14 21:50:52.025125 containerd[1433]: time="2025-07-14T21:50:52.025074152Z" level=info msg="StartContainer for \"5d26632c1aaad8540a86815c103e24bab284e0366beb0ce88357f9e93734297a\" returns successfully" Jul 14 21:50:52.108252 kubelet[2442]: I0714 21:50:52.107196 2442 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:50:52.108252 kubelet[2442]: E0714 21:50:52.107754 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:52.111022 systemd[1]: cri-containerd-5d26632c1aaad8540a86815c103e24bab284e0366beb0ce88357f9e93734297a.scope: Deactivated successfully. 
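The recurring dns.go "Nameserver limits exceeded" warning is kubelet capping the node's resolv.conf at the classic three-resolver limit: with 1.1.1.1, 1.0.0.1, and 8.8.8.8 plus at least one more server configured, only the first three are applied and the rest are dropped from pod DNS config. A small illustrative sketch of that truncation follows; it is an assumption-level rendering of the behavior, not kubelet's actual dns.go.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic resolver limit kubelet enforces
// when it logs "Nameserver limits exceeded".
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Keep only the first three, as in the "applied nameserver line" logged above.
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```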
Jul 14 21:50:52.132097 kubelet[2442]: I0714 21:50:52.131441 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76c6bff9f5-jlk8p" podStartSLOduration=2.357333876 podStartE2EDuration="4.131421681s" podCreationTimestamp="2025-07-14 21:50:48 +0000 UTC" firstStartedPulling="2025-07-14 21:50:49.049713332 +0000 UTC m=+18.171616115" lastFinishedPulling="2025-07-14 21:50:50.823801017 +0000 UTC m=+19.945703920" observedRunningTime="2025-07-14 21:50:51.069115253 +0000 UTC m=+20.191018076" watchObservedRunningTime="2025-07-14 21:50:52.131421681 +0000 UTC m=+21.253324504" Jul 14 21:50:52.170830 containerd[1433]: time="2025-07-14T21:50:52.170281663Z" level=info msg="shim disconnected" id=5d26632c1aaad8540a86815c103e24bab284e0366beb0ce88357f9e93734297a namespace=k8s.io Jul 14 21:50:52.170830 containerd[1433]: time="2025-07-14T21:50:52.170337543Z" level=warning msg="cleaning up after shim disconnected" id=5d26632c1aaad8540a86815c103e24bab284e0366beb0ce88357f9e93734297a namespace=k8s.io Jul 14 21:50:52.170830 containerd[1433]: time="2025-07-14T21:50:52.170348623Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:50:52.831469 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d26632c1aaad8540a86815c103e24bab284e0366beb0ce88357f9e93734297a-rootfs.mount: Deactivated successfully. Jul 14 21:50:52.987987 kubelet[2442]: E0714 21:50:52.987936 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9n94f" podUID="c6dffae0-199e-4860-b66d-240601db16b1" Jul 14 21:50:53.113696 containerd[1433]: time="2025-07-14T21:50:53.112218630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 14 21:50:55.005242 kubelet[2442]: E0714 21:50:55.005197 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9n94f" podUID="c6dffae0-199e-4860-b66d-240601db16b1" Jul 14 21:50:55.459862 containerd[1433]: time="2025-07-14T21:50:55.459752434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:55.460479 containerd[1433]: time="2025-07-14T21:50:55.460396595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 14 21:50:55.461049 containerd[1433]: time="2025-07-14T21:50:55.461014155Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:55.464664 containerd[1433]: time="2025-07-14T21:50:55.464511280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:50:55.465531 containerd[1433]: time="2025-07-14T21:50:55.465272361Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.353013091s" Jul 14 21:50:55.465531 containerd[1433]: time="2025-07-14T21:50:55.465318841Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 14 21:50:55.469617 containerd[1433]: time="2025-07-14T21:50:55.469546367Z" level=info msg="CreateContainer within sandbox \"7f1279c9aa746f3bfb5bab09889b28742d5a4a4882e601d7febfacc215b42fac\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 14 21:50:55.481691 containerd[1433]: time="2025-07-14T21:50:55.481543702Z" level=info msg="CreateContainer within sandbox \"7f1279c9aa746f3bfb5bab09889b28742d5a4a4882e601d7febfacc215b42fac\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"efb1d30fce8a5368be08ab4b17d92d9d2d76c49d61c2a6c94a4105a8d7f2b128\"" Jul 14 21:50:55.482203 containerd[1433]: time="2025-07-14T21:50:55.482177623Z" level=info msg="StartContainer for \"efb1d30fce8a5368be08ab4b17d92d9d2d76c49d61c2a6c94a4105a8d7f2b128\"" Jul 14 21:50:55.512328 systemd[1]: Started cri-containerd-efb1d30fce8a5368be08ab4b17d92d9d2d76c49d61c2a6c94a4105a8d7f2b128.scope - libcontainer container efb1d30fce8a5368be08ab4b17d92d9d2d76c49d61c2a6c94a4105a8d7f2b128. Jul 14 21:50:55.536764 containerd[1433]: time="2025-07-14T21:50:55.536723535Z" level=info msg="StartContainer for \"efb1d30fce8a5368be08ab4b17d92d9d2d76c49d61c2a6c94a4105a8d7f2b128\" returns successfully" Jul 14 21:50:56.100694 kubelet[2442]: I0714 21:50:56.100654 2442 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:50:56.101097 kubelet[2442]: E0714 21:50:56.101081 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:56.124896 kubelet[2442]: E0714 21:50:56.124808 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:56.191105 systemd[1]: cri-containerd-efb1d30fce8a5368be08ab4b17d92d9d2d76c49d61c2a6c94a4105a8d7f2b128.scope: Deactivated successfully. Jul 14 21:50:56.209238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efb1d30fce8a5368be08ab4b17d92d9d2d76c49d61c2a6c94a4105a8d7f2b128-rootfs.mount: Deactivated successfully. Jul 14 21:50:56.277572 kubelet[2442]: I0714 21:50:56.277527 2442 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 14 21:50:56.311009 containerd[1433]: time="2025-07-14T21:50:56.310939324Z" level=info msg="shim disconnected" id=efb1d30fce8a5368be08ab4b17d92d9d2d76c49d61c2a6c94a4105a8d7f2b128 namespace=k8s.io Jul 14 21:50:56.311009 containerd[1433]: time="2025-07-14T21:50:56.311008924Z" level=warning msg="cleaning up after shim disconnected" id=efb1d30fce8a5368be08ab4b17d92d9d2d76c49d61c2a6c94a4105a8d7f2b128 namespace=k8s.io Jul 14 21:50:56.311009 containerd[1433]: time="2025-07-14T21:50:56.311029084Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 21:50:56.330986 systemd[1]: Created slice kubepods-burstable-pod1ff05376_81e3_4cca_acdc_14244d51512a.slice - libcontainer container kubepods-burstable-pod1ff05376_81e3_4cca_acdc_14244d51512a.slice. 
Jul 14 21:50:56.338923 systemd[1]: Created slice kubepods-besteffort-podd8d7425c_d46f_4d09_ac17_c7d1cb41b4a6.slice - libcontainer container kubepods-besteffort-podd8d7425c_d46f_4d09_ac17_c7d1cb41b4a6.slice. Jul 14 21:50:56.345038 systemd[1]: Created slice kubepods-besteffort-pod93233d1f_a8c9_4f31_8a33_73445cc9215c.slice - libcontainer container kubepods-besteffort-pod93233d1f_a8c9_4f31_8a33_73445cc9215c.slice. Jul 14 21:50:56.349832 systemd[1]: Created slice kubepods-burstable-pod21056f12_8439_463a_a28c_d3964e6d90bc.slice - libcontainer container kubepods-burstable-pod21056f12_8439_463a_a28c_d3964e6d90bc.slice. Jul 14 21:50:56.357903 systemd[1]: Created slice kubepods-besteffort-podd8e339a0_f1ef_4377_b419_e88422d3110e.slice - libcontainer container kubepods-besteffort-podd8e339a0_f1ef_4377_b419_e88422d3110e.slice. Jul 14 21:50:56.366206 systemd[1]: Created slice kubepods-besteffort-pod8792d4f6_4ad7_4255_a985_d0b68c2e01d9.slice - libcontainer container kubepods-besteffort-pod8792d4f6_4ad7_4255_a985_d0b68c2e01d9.slice. Jul 14 21:50:56.369876 systemd[1]: Created slice kubepods-besteffort-podb4b62d59_0160_4a23_8603_079ff6a4f14c.slice - libcontainer container kubepods-besteffort-podb4b62d59_0160_4a23_8603_079ff6a4f14c.slice. Jul 14 21:50:56.384422 kubelet[2442]: I0714 21:50:56.383976 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgt6m\" (UniqueName: \"kubernetes.io/projected/93233d1f-a8c9-4f31-8a33-73445cc9215c-kube-api-access-wgt6m\") pod \"calico-apiserver-5ddcff449d-6w6dr\" (UID: \"93233d1f-a8c9-4f31-8a33-73445cc9215c\") " pod="calico-apiserver/calico-apiserver-5ddcff449d-6w6dr" Jul 14 21:50:56.384422 kubelet[2442]: I0714 21:50:56.384015 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzxlj\" (UniqueName: \"kubernetes.io/projected/b4b62d59-0160-4a23-8603-079ff6a4f14c-kube-api-access-wzxlj\") pod \"goldmane-768f4c5c69-jcm2t\" (UID: \"b4b62d59-0160-4a23-8603-079ff6a4f14c\") " pod="calico-system/goldmane-768f4c5c69-jcm2t" Jul 14 21:50:56.384422 kubelet[2442]: I0714 21:50:56.384037 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21056f12-8439-463a-a28c-d3964e6d90bc-config-volume\") pod \"coredns-668d6bf9bc-r2zgm\" (UID: \"21056f12-8439-463a-a28c-d3964e6d90bc\") " pod="kube-system/coredns-668d6bf9bc-r2zgm" Jul 14 21:50:56.384422 kubelet[2442]: I0714 21:50:56.384054 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzpmw\" (UniqueName: \"kubernetes.io/projected/21056f12-8439-463a-a28c-d3964e6d90bc-kube-api-access-bzpmw\") pod \"coredns-668d6bf9bc-r2zgm\" (UID: \"21056f12-8439-463a-a28c-d3964e6d90bc\") " pod="kube-system/coredns-668d6bf9bc-r2zgm" Jul 14 21:50:56.384422 kubelet[2442]: I0714 21:50:56.384072 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b4b62d59-0160-4a23-8603-079ff6a4f14c-config\") pod \"goldmane-768f4c5c69-jcm2t\" (UID: \"b4b62d59-0160-4a23-8603-079ff6a4f14c\") " pod="calico-system/goldmane-768f4c5c69-jcm2t" Jul 14 21:50:56.384660 kubelet[2442]: I0714 21:50:56.384088 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp89m\" (UniqueName: 
\"kubernetes.io/projected/d8e339a0-f1ef-4377-b419-e88422d3110e-kube-api-access-zp89m\") pod \"whisker-7774d4d645-rqsfv\" (UID: \"d8e339a0-f1ef-4377-b419-e88422d3110e\") " pod="calico-system/whisker-7774d4d645-rqsfv" Jul 14 21:50:56.384660 kubelet[2442]: I0714 21:50:56.384104 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8792d4f6-4ad7-4255-a985-d0b68c2e01d9-tigera-ca-bundle\") pod \"calico-kube-controllers-5fcd7777df-hlm2l\" (UID: \"8792d4f6-4ad7-4255-a985-d0b68c2e01d9\") " pod="calico-system/calico-kube-controllers-5fcd7777df-hlm2l" Jul 14 21:50:56.384660 kubelet[2442]: I0714 21:50:56.384123 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8e339a0-f1ef-4377-b419-e88422d3110e-whisker-ca-bundle\") pod \"whisker-7774d4d645-rqsfv\" (UID: \"d8e339a0-f1ef-4377-b419-e88422d3110e\") " pod="calico-system/whisker-7774d4d645-rqsfv" Jul 14 21:50:56.384660 kubelet[2442]: I0714 21:50:56.384140 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1ff05376-81e3-4cca-acdc-14244d51512a-config-volume\") pod \"coredns-668d6bf9bc-82m46\" (UID: \"1ff05376-81e3-4cca-acdc-14244d51512a\") " pod="kube-system/coredns-668d6bf9bc-82m46" Jul 14 21:50:56.384660 kubelet[2442]: I0714 21:50:56.384157 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/93233d1f-a8c9-4f31-8a33-73445cc9215c-calico-apiserver-certs\") pod \"calico-apiserver-5ddcff449d-6w6dr\" (UID: \"93233d1f-a8c9-4f31-8a33-73445cc9215c\") " pod="calico-apiserver/calico-apiserver-5ddcff449d-6w6dr" Jul 14 21:50:56.384764 kubelet[2442]: I0714 21:50:56.384172 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4b62d59-0160-4a23-8603-079ff6a4f14c-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-jcm2t\" (UID: \"b4b62d59-0160-4a23-8603-079ff6a4f14c\") " pod="calico-system/goldmane-768f4c5c69-jcm2t" Jul 14 21:50:56.384764 kubelet[2442]: I0714 21:50:56.384186 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b4b62d59-0160-4a23-8603-079ff6a4f14c-goldmane-key-pair\") pod \"goldmane-768f4c5c69-jcm2t\" (UID: \"b4b62d59-0160-4a23-8603-079ff6a4f14c\") " pod="calico-system/goldmane-768f4c5c69-jcm2t" Jul 14 21:50:56.384764 kubelet[2442]: I0714 21:50:56.384201 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8e339a0-f1ef-4377-b419-e88422d3110e-whisker-backend-key-pair\") pod \"whisker-7774d4d645-rqsfv\" (UID: \"d8e339a0-f1ef-4377-b419-e88422d3110e\") " pod="calico-system/whisker-7774d4d645-rqsfv" Jul 14 21:50:56.384764 kubelet[2442]: I0714 21:50:56.384228 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgbq2\" (UniqueName: \"kubernetes.io/projected/d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6-kube-api-access-mgbq2\") pod \"calico-apiserver-5ddcff449d-zw4fd\" (UID: \"d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6\") " 
pod="calico-apiserver/calico-apiserver-5ddcff449d-zw4fd" Jul 14 21:50:56.384764 kubelet[2442]: I0714 21:50:56.384248 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6-calico-apiserver-certs\") pod \"calico-apiserver-5ddcff449d-zw4fd\" (UID: \"d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6\") " pod="calico-apiserver/calico-apiserver-5ddcff449d-zw4fd" Jul 14 21:50:56.384879 kubelet[2442]: I0714 21:50:56.384265 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw7kb\" (UniqueName: \"kubernetes.io/projected/1ff05376-81e3-4cca-acdc-14244d51512a-kube-api-access-cw7kb\") pod \"coredns-668d6bf9bc-82m46\" (UID: \"1ff05376-81e3-4cca-acdc-14244d51512a\") " pod="kube-system/coredns-668d6bf9bc-82m46" Jul 14 21:50:56.384879 kubelet[2442]: I0714 21:50:56.384282 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6b6\" (UniqueName: \"kubernetes.io/projected/8792d4f6-4ad7-4255-a985-d0b68c2e01d9-kube-api-access-ql6b6\") pod \"calico-kube-controllers-5fcd7777df-hlm2l\" (UID: \"8792d4f6-4ad7-4255-a985-d0b68c2e01d9\") " pod="calico-system/calico-kube-controllers-5fcd7777df-hlm2l" Jul 14 21:50:56.637272 kubelet[2442]: E0714 21:50:56.637164 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:56.638553 containerd[1433]: time="2025-07-14T21:50:56.638443806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-82m46,Uid:1ff05376-81e3-4cca-acdc-14244d51512a,Namespace:kube-system,Attempt:0,}" Jul 14 21:50:56.641325 containerd[1433]: time="2025-07-14T21:50:56.641294810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddcff449d-zw4fd,Uid:d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6,Namespace:calico-apiserver,Attempt:0,}" Jul 14 21:50:56.648148 containerd[1433]: time="2025-07-14T21:50:56.647933218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddcff449d-6w6dr,Uid:93233d1f-a8c9-4f31-8a33-73445cc9215c,Namespace:calico-apiserver,Attempt:0,}" Jul 14 21:50:56.653564 kubelet[2442]: E0714 21:50:56.653515 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:50:56.654177 containerd[1433]: time="2025-07-14T21:50:56.654145265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r2zgm,Uid:21056f12-8439-463a-a28c-d3964e6d90bc,Namespace:kube-system,Attempt:0,}" Jul 14 21:50:56.664502 containerd[1433]: time="2025-07-14T21:50:56.664456158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7774d4d645-rqsfv,Uid:d8e339a0-f1ef-4377-b419-e88422d3110e,Namespace:calico-system,Attempt:0,}" Jul 14 21:50:56.669205 containerd[1433]: time="2025-07-14T21:50:56.669163964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fcd7777df-hlm2l,Uid:8792d4f6-4ad7-4255-a985-d0b68c2e01d9,Namespace:calico-system,Attempt:0,}" Jul 14 21:50:56.673879 containerd[1433]: time="2025-07-14T21:50:56.673844890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-jcm2t,Uid:b4b62d59-0160-4a23-8603-079ff6a4f14c,Namespace:calico-system,Attempt:0,}" Jul 
14 21:50:56.997594 systemd[1]: Created slice kubepods-besteffort-podc6dffae0_199e_4860_b66d_240601db16b1.slice - libcontainer container kubepods-besteffort-podc6dffae0_199e_4860_b66d_240601db16b1.slice. Jul 14 21:50:57.000656 containerd[1433]: time="2025-07-14T21:50:57.000281891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9n94f,Uid:c6dffae0-199e-4860-b66d-240601db16b1,Namespace:calico-system,Attempt:0,}" Jul 14 21:50:57.001061 containerd[1433]: time="2025-07-14T21:50:57.001019131Z" level=error msg="Failed to destroy network for sandbox \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.002736 containerd[1433]: time="2025-07-14T21:50:57.002692173Z" level=error msg="encountered an error cleaning up failed sandbox \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.002873 containerd[1433]: time="2025-07-14T21:50:57.002844974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddcff449d-zw4fd,Uid:d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.003829 kubelet[2442]: E0714 21:50:57.003796 2442 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.004719 containerd[1433]: time="2025-07-14T21:50:57.004683336Z" level=error msg="Failed to destroy network for sandbox \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.006003 containerd[1433]: time="2025-07-14T21:50:57.005966737Z" level=error msg="encountered an error cleaning up failed sandbox \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.006159 containerd[1433]: time="2025-07-14T21:50:57.006135017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r2zgm,Uid:21056f12-8439-463a-a28c-d3964e6d90bc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
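file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"

Every sandbox failure in this burst has the same root cause: on both the (add) and (delete) paths the Calico CNI plugin first looks up the node identity in /var/lib/calico/nodename, a file that only exists once the calico/node container has started and mounted that host directory. A minimal Go sketch of that gate, with the path and error wording taken from the log itself (illustrative, not Calico's actual source):

    // nodename_gate.go - why every CNI call in this log fails while calico/node is down.
    package main

    import (
        "fmt"
        "os"
    )

    // nodenameFile is written by calico/node on startup; until then the
    // CNI binary has no node identity and must refuse ADD and DEL alike.
    const nodenameFile = "/var/lib/calico/nodename"

    func nodename() (string, error) {
        b, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        return string(b), nil
    }

    func main() {
        name, err := nodename()
        if err != nil {
            fmt.Println("CNI ADD/DEL would fail:", err) // the error kubelet keeps relaying below
            return
        }
        fmt.Println("CNI calls can proceed on node", name)
    }

The one error then fans out through three layers per pod, as the records below show: containerd reports the failed plugin call, kuberuntime_sandbox.go and kuberuntime_manager.go wrap it into a CreatePodSandboxError, and pod_workers.go abandons the sync attempt ("Error syncing pod, skipping") until the next retry.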
Jul 14 21:50:57.006884 kubelet[2442]: E0714 21:50:57.006831 2442 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddcff449d-zw4fd" Jul 14 21:50:57.006961 kubelet[2442]: E0714 21:50:57.006885 2442 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddcff449d-zw4fd" Jul 14 21:50:57.006992 kubelet[2442]: E0714 21:50:57.006947 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ddcff449d-zw4fd_calico-apiserver(d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ddcff449d-zw4fd_calico-apiserver(d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddcff449d-zw4fd" podUID="d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6" Jul 14 21:50:57.009796 containerd[1433]: time="2025-07-14T21:50:57.008517860Z" level=error msg="Failed to destroy network for sandbox \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.009957 kubelet[2442]: E0714 21:50:57.009926 2442 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.010004 kubelet[2442]: E0714 21:50:57.009986 2442 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-r2zgm" Jul 14 21:50:57.010031 kubelet[2442]: E0714 21:50:57.010006 2442 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\": plugin
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-r2zgm" Jul 14 21:50:57.010182 kubelet[2442]: E0714 21:50:57.010122 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-r2zgm_kube-system(21056f12-8439-463a-a28c-d3964e6d90bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-r2zgm_kube-system(21056f12-8439-463a-a28c-d3964e6d90bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-r2zgm" podUID="21056f12-8439-463a-a28c-d3964e6d90bc" Jul 14 21:50:57.010411 containerd[1433]: time="2025-07-14T21:50:57.010313342Z" level=error msg="Failed to destroy network for sandbox \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.010828 containerd[1433]: time="2025-07-14T21:50:57.010792383Z" level=error msg="encountered an error cleaning up failed sandbox \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.010950 containerd[1433]: time="2025-07-14T21:50:57.010841943Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fcd7777df-hlm2l,Uid:8792d4f6-4ad7-4255-a985-d0b68c2e01d9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.011121 kubelet[2442]: E0714 21:50:57.011045 2442 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.011121 kubelet[2442]: E0714 21:50:57.011091 2442 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fcd7777df-hlm2l" Jul 14 21:50:57.011121 kubelet[2442]: E0714 21:50:57.011107 2442 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fcd7777df-hlm2l" Jul 14 21:50:57.011255 kubelet[2442]: E0714 21:50:57.011141 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fcd7777df-hlm2l_calico-system(8792d4f6-4ad7-4255-a985-d0b68c2e01d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fcd7777df-hlm2l_calico-system(8792d4f6-4ad7-4255-a985-d0b68c2e01d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fcd7777df-hlm2l" podUID="8792d4f6-4ad7-4255-a985-d0b68c2e01d9" Jul 14 21:50:57.011586 containerd[1433]: time="2025-07-14T21:50:57.011424743Z" level=error msg="encountered an error cleaning up failed sandbox \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.011586 containerd[1433]: time="2025-07-14T21:50:57.011477944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7774d4d645-rqsfv,Uid:d8e339a0-f1ef-4377-b419-e88422d3110e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.011741 kubelet[2442]: E0714 21:50:57.011677 2442 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.011741 kubelet[2442]: E0714 21:50:57.011717 2442 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7774d4d645-rqsfv" Jul 14 21:50:57.011741 kubelet[2442]: E0714 21:50:57.011733 2442 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-7774d4d645-rqsfv" Jul 14 21:50:57.011822 kubelet[2442]: E0714 21:50:57.011767 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7774d4d645-rqsfv_calico-system(d8e339a0-f1ef-4377-b419-e88422d3110e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7774d4d645-rqsfv_calico-system(d8e339a0-f1ef-4377-b419-e88422d3110e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7774d4d645-rqsfv" podUID="d8e339a0-f1ef-4377-b419-e88422d3110e" Jul 14 21:50:57.033386 containerd[1433]: time="2025-07-14T21:50:57.033324929Z" level=error msg="Failed to destroy network for sandbox \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.033802 containerd[1433]: time="2025-07-14T21:50:57.033767289Z" level=error msg="encountered an error cleaning up failed sandbox \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.033849 containerd[1433]: time="2025-07-14T21:50:57.033822089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddcff449d-6w6dr,Uid:93233d1f-a8c9-4f31-8a33-73445cc9215c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.033993 containerd[1433]: time="2025-07-14T21:50:57.033964369Z" level=error msg="Failed to destroy network for sandbox \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.034091 kubelet[2442]: E0714 21:50:57.034052 2442 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.034153 kubelet[2442]: E0714 21:50:57.034117 2442 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-5ddcff449d-6w6dr" Jul 14 21:50:57.034153 kubelet[2442]: E0714 21:50:57.034136 2442 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddcff449d-6w6dr" Jul 14 21:50:57.034208 kubelet[2442]: E0714 21:50:57.034184 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ddcff449d-6w6dr_calico-apiserver(93233d1f-a8c9-4f31-8a33-73445cc9215c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ddcff449d-6w6dr_calico-apiserver(93233d1f-a8c9-4f31-8a33-73445cc9215c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddcff449d-6w6dr" podUID="93233d1f-a8c9-4f31-8a33-73445cc9215c" Jul 14 21:50:57.034253 containerd[1433]: time="2025-07-14T21:50:57.034198570Z" level=error msg="encountered an error cleaning up failed sandbox \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.034285 containerd[1433]: time="2025-07-14T21:50:57.034244730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-jcm2t,Uid:b4b62d59-0160-4a23-8603-079ff6a4f14c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.034520 kubelet[2442]: E0714 21:50:57.034396 2442 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.034696 kubelet[2442]: E0714 21:50:57.034447 2442 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-jcm2t" Jul 14 21:50:57.034696 kubelet[2442]: E0714 21:50:57.034615 2442 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-jcm2t" Jul 14 21:50:57.034696 kubelet[2442]: E0714 21:50:57.034652 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-jcm2t_calico-system(b4b62d59-0160-4a23-8603-079ff6a4f14c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-jcm2t_calico-system(b4b62d59-0160-4a23-8603-079ff6a4f14c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-jcm2t" podUID="b4b62d59-0160-4a23-8603-079ff6a4f14c" Jul 14 21:50:57.039144 containerd[1433]: time="2025-07-14T21:50:57.039098535Z" level=error msg="Failed to destroy network for sandbox \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.039476 containerd[1433]: time="2025-07-14T21:50:57.039428336Z" level=error msg="encountered an error cleaning up failed sandbox \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.039511 containerd[1433]: time="2025-07-14T21:50:57.039487456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-82m46,Uid:1ff05376-81e3-4cca-acdc-14244d51512a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.039774 kubelet[2442]: E0714 21:50:57.039736 2442 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.039941 kubelet[2442]: E0714 21:50:57.039878 2442 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-82m46" Jul 14 21:50:57.039941 kubelet[2442]: E0714 21:50:57.039900 2442 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-82m46" Jul 14 21:50:57.040113 kubelet[2442]: E0714 21:50:57.040058 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-82m46_kube-system(1ff05376-81e3-4cca-acdc-14244d51512a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-82m46_kube-system(1ff05376-81e3-4cca-acdc-14244d51512a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-82m46" podUID="1ff05376-81e3-4cca-acdc-14244d51512a" Jul 14 21:50:57.071423 containerd[1433]: time="2025-07-14T21:50:57.071288332Z" level=error msg="Failed to destroy network for sandbox \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.071782 containerd[1433]: time="2025-07-14T21:50:57.071751533Z" level=error msg="encountered an error cleaning up failed sandbox \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.071998 containerd[1433]: time="2025-07-14T21:50:57.071888453Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9n94f,Uid:c6dffae0-199e-4860-b66d-240601db16b1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.072174 kubelet[2442]: E0714 21:50:57.072134 2442 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.072234 kubelet[2442]: E0714 21:50:57.072197 2442 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9n94f" Jul 14 21:50:57.072234 kubelet[2442]: E0714 21:50:57.072218 2442 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9n94f" Jul 14 21:50:57.072347 kubelet[2442]: E0714 21:50:57.072263 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9n94f_calico-system(c6dffae0-199e-4860-b66d-240601db16b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9n94f_calico-system(c6dffae0-199e-4860-b66d-240601db16b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9n94f" podUID="c6dffae0-199e-4860-b66d-240601db16b1" Jul 14 21:50:57.128375 containerd[1433]: time="2025-07-14T21:50:57.127930558Z" level=info msg="StopPodSandbox for \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\"" Jul 14 21:50:57.128375 containerd[1433]: time="2025-07-14T21:50:57.128090678Z" level=info msg="Ensure that sandbox 6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003 in task-service has been cleanup successfully" Jul 14 21:50:57.130700 containerd[1433]: time="2025-07-14T21:50:57.130666641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 14 21:50:57.132288 kubelet[2442]: I0714 21:50:57.132187 2442 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Jul 14 21:50:57.132683 kubelet[2442]: I0714 21:50:57.132594 2442 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Jul 14 21:50:57.133744 containerd[1433]: time="2025-07-14T21:50:57.133706444Z" level=info msg="StopPodSandbox for \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\"" Jul 14 21:50:57.134207 containerd[1433]: time="2025-07-14T21:50:57.134161605Z" level=info msg="Ensure that sandbox ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e in task-service has been cleanup successfully" Jul 14 21:50:57.136499 kubelet[2442]: I0714 21:50:57.136262 2442 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Jul 14 21:50:57.138752 containerd[1433]: time="2025-07-14T21:50:57.138724650Z" level=info msg="StopPodSandbox for \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\"" Jul 14 21:50:57.139253 kubelet[2442]: I0714 21:50:57.138970 2442 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Jul 14 21:50:57.139327 containerd[1433]: time="2025-07-14T21:50:57.139150451Z" level=info msg="Ensure that sandbox cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46 in task-service has been cleanup successfully" Jul 14 21:50:57.139765 containerd[1433]: time="2025-07-14T21:50:57.139727691Z" level=info msg="StopPodSandbox for 
\"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\"" Jul 14 21:50:57.140504 kubelet[2442]: I0714 21:50:57.140461 2442 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:50:57.141045 containerd[1433]: time="2025-07-14T21:50:57.140290332Z" level=info msg="Ensure that sandbox 20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e in task-service has been cleanup successfully" Jul 14 21:50:57.141786 containerd[1433]: time="2025-07-14T21:50:57.141660973Z" level=info msg="StopPodSandbox for \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\"" Jul 14 21:50:57.141856 containerd[1433]: time="2025-07-14T21:50:57.141809414Z" level=info msg="Ensure that sandbox 68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681 in task-service has been cleanup successfully" Jul 14 21:50:57.144853 kubelet[2442]: I0714 21:50:57.144816 2442 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Jul 14 21:50:57.145530 containerd[1433]: time="2025-07-14T21:50:57.145472658Z" level=info msg="StopPodSandbox for \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\"" Jul 14 21:50:57.145831 containerd[1433]: time="2025-07-14T21:50:57.145793138Z" level=info msg="Ensure that sandbox adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd in task-service has been cleanup successfully" Jul 14 21:50:57.148583 kubelet[2442]: I0714 21:50:57.148532 2442 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:50:57.150425 kubelet[2442]: I0714 21:50:57.150381 2442 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:50:57.154586 containerd[1433]: time="2025-07-14T21:50:57.154529548Z" level=info msg="StopPodSandbox for \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\"" Jul 14 21:50:57.154881 containerd[1433]: time="2025-07-14T21:50:57.154669668Z" level=info msg="StopPodSandbox for \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\"" Jul 14 21:50:57.154962 containerd[1433]: time="2025-07-14T21:50:57.154903109Z" level=info msg="Ensure that sandbox 4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a in task-service has been cleanup successfully" Jul 14 21:50:57.154990 containerd[1433]: time="2025-07-14T21:50:57.154958709Z" level=info msg="Ensure that sandbox b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c in task-service has been cleanup successfully" Jul 14 21:50:57.214869 containerd[1433]: time="2025-07-14T21:50:57.214814538Z" level=error msg="StopPodSandbox for \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\" failed" error="failed to destroy network for sandbox \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.215184 kubelet[2442]: E0714 21:50:57.215057 2442 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Jul 14 21:50:57.219608 kubelet[2442]: E0714 21:50:57.219372 2442 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003"} Jul 14 21:50:57.219783 kubelet[2442]: E0714 21:50:57.219629 2442 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1ff05376-81e3-4cca-acdc-14244d51512a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:50:57.219783 kubelet[2442]: E0714 21:50:57.219659 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1ff05376-81e3-4cca-acdc-14244d51512a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-82m46" podUID="1ff05376-81e3-4cca-acdc-14244d51512a" Jul 14 21:50:57.221086 containerd[1433]: time="2025-07-14T21:50:57.221027225Z" level=error msg="StopPodSandbox for \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\" failed" error="failed to destroy network for sandbox \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.221476 kubelet[2442]: E0714 21:50:57.221428 2442 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Jul 14 21:50:57.221534 kubelet[2442]: E0714 21:50:57.221482 2442 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e"} Jul 14 21:50:57.221534 kubelet[2442]: E0714 21:50:57.221508 2442 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b4b62d59-0160-4a23-8603-079ff6a4f14c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:50:57.221534 kubelet[2442]: E0714 21:50:57.221527 
2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b4b62d59-0160-4a23-8603-079ff6a4f14c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-jcm2t" podUID="b4b62d59-0160-4a23-8603-079ff6a4f14c" Jul 14 21:50:57.224549 containerd[1433]: time="2025-07-14T21:50:57.224493549Z" level=error msg="StopPodSandbox for \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\" failed" error="failed to destroy network for sandbox \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.224714 containerd[1433]: time="2025-07-14T21:50:57.224681549Z" level=error msg="StopPodSandbox for \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\" failed" error="failed to destroy network for sandbox \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.224760 kubelet[2442]: E0714 21:50:57.224680 2442 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:50:57.224760 kubelet[2442]: E0714 21:50:57.224716 2442 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681"} Jul 14 21:50:57.224760 kubelet[2442]: E0714 21:50:57.224753 2442 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c6dffae0-199e-4860-b66d-240601db16b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:50:57.224877 kubelet[2442]: E0714 21:50:57.224772 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c6dffae0-199e-4860-b66d-240601db16b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9n94f" podUID="c6dffae0-199e-4860-b66d-240601db16b1" Jul 14 21:50:57.224913 
kubelet[2442]: E0714 21:50:57.224842 2442 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:50:57.224913 kubelet[2442]: E0714 21:50:57.224907 2442 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c"} Jul 14 21:50:57.224960 kubelet[2442]: E0714 21:50:57.224925 2442 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:50:57.224960 kubelet[2442]: E0714 21:50:57.224941 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddcff449d-zw4fd" podUID="d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6" Jul 14 21:50:57.229488 containerd[1433]: time="2025-07-14T21:50:57.228871154Z" level=error msg="StopPodSandbox for \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\" failed" error="failed to destroy network for sandbox \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.229584 kubelet[2442]: E0714 21:50:57.229034 2442 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:50:57.229584 kubelet[2442]: E0714 21:50:57.229063 2442 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a"} Jul 14 21:50:57.229584 kubelet[2442]: E0714 21:50:57.229084 2442 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"93233d1f-a8c9-4f31-8a33-73445cc9215c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:50:57.229584 kubelet[2442]: E0714 21:50:57.229101 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"93233d1f-a8c9-4f31-8a33-73445cc9215c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddcff449d-6w6dr" podUID="93233d1f-a8c9-4f31-8a33-73445cc9215c" Jul 14 21:50:57.230998 containerd[1433]: time="2025-07-14T21:50:57.230963236Z" level=error msg="StopPodSandbox for \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\" failed" error="failed to destroy network for sandbox \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.231261 kubelet[2442]: E0714 21:50:57.231202 2442 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Jul 14 21:50:57.231261 kubelet[2442]: E0714 21:50:57.231257 2442 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e"} Jul 14 21:50:57.231351 kubelet[2442]: E0714 21:50:57.231280 2442 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21056f12-8439-463a-a28c-d3964e6d90bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:50:57.231351 kubelet[2442]: E0714 21:50:57.231296 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21056f12-8439-463a-a28c-d3964e6d90bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-r2zgm" podUID="21056f12-8439-463a-a28c-d3964e6d90bc" Jul 14 21:50:57.239630 containerd[1433]: time="2025-07-14T21:50:57.239514366Z" level=error msg="StopPodSandbox for \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\" failed" 
error="failed to destroy network for sandbox \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.239742 kubelet[2442]: E0714 21:50:57.239706 2442 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Jul 14 21:50:57.239857 kubelet[2442]: E0714 21:50:57.239744 2442 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46"} Jul 14 21:50:57.239857 kubelet[2442]: E0714 21:50:57.239786 2442 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8792d4f6-4ad7-4255-a985-d0b68c2e01d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:50:57.239934 kubelet[2442]: E0714 21:50:57.239865 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8792d4f6-4ad7-4255-a985-d0b68c2e01d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fcd7777df-hlm2l" podUID="8792d4f6-4ad7-4255-a985-d0b68c2e01d9" Jul 14 21:50:57.240948 containerd[1433]: time="2025-07-14T21:50:57.240868848Z" level=error msg="StopPodSandbox for \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\" failed" error="failed to destroy network for sandbox \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 21:50:57.241048 kubelet[2442]: E0714 21:50:57.241019 2442 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Jul 14 21:50:57.241087 kubelet[2442]: E0714 21:50:57.241051 2442 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd"} Jul 14 21:50:57.241087 kubelet[2442]: 
E0714 21:50:57.241081 2442 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d8e339a0-f1ef-4377-b419-e88422d3110e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 21:50:57.241150 kubelet[2442]: E0714 21:50:57.241098 2442 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d8e339a0-f1ef-4377-b419-e88422d3110e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7774d4d645-rqsfv" podUID="d8e339a0-f1ef-4377-b419-e88422d3110e" Jul 14 21:51:01.263196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3946534338.mount: Deactivated successfully. Jul 14 21:51:01.565827 containerd[1433]: time="2025-07-14T21:51:01.565611666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:01.566324 containerd[1433]: time="2025-07-14T21:51:01.566280667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 14 21:51:01.567386 containerd[1433]: time="2025-07-14T21:51:01.567335108Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:01.569708 containerd[1433]: time="2025-07-14T21:51:01.569653990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:01.570438 containerd[1433]: time="2025-07-14T21:51:01.570380310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.439670909s" Jul 14 21:51:01.570438 containerd[1433]: time="2025-07-14T21:51:01.570429590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 14 21:51:01.582407 containerd[1433]: time="2025-07-14T21:51:01.582332681Z" level=info msg="CreateContainer within sandbox \"7f1279c9aa746f3bfb5bab09889b28742d5a4a4882e601d7febfacc215b42fac\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 21:51:01.666764 containerd[1433]: time="2025-07-14T21:51:01.666695116Z" level=info msg="CreateContainer within sandbox \"7f1279c9aa746f3bfb5bab09889b28742d5a4a4882e601d7febfacc215b42fac\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"757828062e84773abc572e26d37f61e3c282305a461a17d5b4ee31188e4ff64e\"" Jul 14 21:51:01.667486 
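containerd[1433]: time="2025-07-14T21:51:01.667249357Z" level=info msg="StartContainer for \"757828062e84773abc572e26d37f61e3c282305a461a17d5b4ee31188e4ff64e\""

The container id returned by CreateContainer reappears immediately below as a systemd unit: with the systemd cgroup driver, containerd runs each container as cri-containerd-<id>.scope, parented under the kubepods-<qos>-pod<uid>.slice units kubelet created at 21:50:56. A sketch of how those unit names are assembled (the helper names are illustrative, not kubelet's API; the pod UID's dashes become underscores because "-" is systemd's slice-hierarchy separator):

    // unit_names.go - reconstructing the slice and scope names seen in this log.
    package main

    import (
        "fmt"
        "strings"
    )

    func podSlice(qos, podUID string) string {
        // "-" separates parent slices in a systemd unit name, so dashes
        // inside the UID are escaped to underscores to keep the hierarchy intact.
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
    }

    func containerScope(containerID string) string {
        return "cri-containerd-" + containerID + ".scope"
    }

    func main() {
        fmt.Println(podSlice("besteffort", "c6dffae0-199e-4860-b66d-240601db16b1"))
        // kubepods-besteffort-podc6dffae0_199e_4860_b66d_240601db16b1.slice
        fmt.Println(containerScope("757828062e84773abc572e26d37f61e3c282305a461a17d5b4ee31188e4ff64e"))
        // cri-containerd-757828062e84773abc572e26d37f61e3c282305a461a17d5b4ee31188e4ff64e.scope
    }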
containerd[1433]: time="2025-07-14T21:51:01.667249357Z" level=info msg="StartContainer for \"757828062e84773abc572e26d37f61e3c282305a461a17d5b4ee31188e4ff64e\"" Jul 14 21:51:01.729793 systemd[1]: Started cri-containerd-757828062e84773abc572e26d37f61e3c282305a461a17d5b4ee31188e4ff64e.scope - libcontainer container 757828062e84773abc572e26d37f61e3c282305a461a17d5b4ee31188e4ff64e. Jul 14 21:51:01.937903 containerd[1433]: time="2025-07-14T21:51:01.937776477Z" level=info msg="StartContainer for \"757828062e84773abc572e26d37f61e3c282305a461a17d5b4ee31188e4ff64e\" returns successfully" Jul 14 21:51:02.015725 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 14 21:51:02.016744 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 14 21:51:02.145428 containerd[1433]: time="2025-07-14T21:51:02.145373214Z" level=info msg="StopPodSandbox for \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\"" Jul 14 21:51:02.314894 kubelet[2442]: I0714 21:51:02.314687 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9jxcm" podStartSLOduration=1.9363229720000001 podStartE2EDuration="14.314665875s" podCreationTimestamp="2025-07-14 21:50:48 +0000 UTC" firstStartedPulling="2025-07-14 21:50:49.192710048 +0000 UTC m=+18.314612871" lastFinishedPulling="2025-07-14 21:51:01.571052951 +0000 UTC m=+30.692955774" observedRunningTime="2025-07-14 21:51:02.191884853 +0000 UTC m=+31.313787676" watchObservedRunningTime="2025-07-14 21:51:02.314665875 +0000 UTC m=+31.436568698" Jul 14 21:51:02.470144 containerd[1433]: 2025-07-14 21:51:02.311 [INFO][3772] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Jul 14 21:51:02.470144 containerd[1433]: 2025-07-14 21:51:02.312 [INFO][3772] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" iface="eth0" netns="/var/run/netns/cni-ffd13b28-91ed-377f-3c50-37d7a35d5020" Jul 14 21:51:02.470144 containerd[1433]: 2025-07-14 21:51:02.312 [INFO][3772] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" iface="eth0" netns="/var/run/netns/cni-ffd13b28-91ed-377f-3c50-37d7a35d5020" Jul 14 21:51:02.470144 containerd[1433]: 2025-07-14 21:51:02.316 [INFO][3772] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" iface="eth0" netns="/var/run/netns/cni-ffd13b28-91ed-377f-3c50-37d7a35d5020" Jul 14 21:51:02.470144 containerd[1433]: 2025-07-14 21:51:02.316 [INFO][3772] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Jul 14 21:51:02.470144 containerd[1433]: 2025-07-14 21:51:02.319 [INFO][3772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Jul 14 21:51:02.470144 containerd[1433]: 2025-07-14 21:51:02.454 [INFO][3802] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" HandleID="k8s-pod-network.adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Workload="localhost-k8s-whisker--7774d4d645--rqsfv-eth0" Jul 14 21:51:02.470144 containerd[1433]: 2025-07-14 21:51:02.454 [INFO][3802] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:02.470144 containerd[1433]: 2025-07-14 21:51:02.454 [INFO][3802] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:02.470144 containerd[1433]: 2025-07-14 21:51:02.464 [WARNING][3802] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" HandleID="k8s-pod-network.adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Workload="localhost-k8s-whisker--7774d4d645--rqsfv-eth0" Jul 14 21:51:02.470144 containerd[1433]: 2025-07-14 21:51:02.464 [INFO][3802] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" HandleID="k8s-pod-network.adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Workload="localhost-k8s-whisker--7774d4d645--rqsfv-eth0" Jul 14 21:51:02.470144 containerd[1433]: 2025-07-14 21:51:02.465 [INFO][3802] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:02.470144 containerd[1433]: 2025-07-14 21:51:02.468 [INFO][3772] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Jul 14 21:51:02.470633 containerd[1433]: time="2025-07-14T21:51:02.470282845Z" level=info msg="TearDown network for sandbox \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\" successfully" Jul 14 21:51:02.470633 containerd[1433]: time="2025-07-14T21:51:02.470310765Z" level=info msg="StopPodSandbox for \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\" returns successfully" Jul 14 21:51:02.472460 systemd[1]: run-netns-cni\x2dffd13b28\x2d91ed\x2d377f\x2d3c50\x2d37d7a35d5020.mount: Deactivated successfully. 
Jul 14 21:51:02.531082 kubelet[2442]: I0714 21:51:02.530722 2442 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8e339a0-f1ef-4377-b419-e88422d3110e-whisker-ca-bundle\") pod \"d8e339a0-f1ef-4377-b419-e88422d3110e\" (UID: \"d8e339a0-f1ef-4377-b419-e88422d3110e\") " Jul 14 21:51:02.531082 kubelet[2442]: I0714 21:51:02.530783 2442 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8e339a0-f1ef-4377-b419-e88422d3110e-whisker-backend-key-pair\") pod \"d8e339a0-f1ef-4377-b419-e88422d3110e\" (UID: \"d8e339a0-f1ef-4377-b419-e88422d3110e\") " Jul 14 21:51:02.531082 kubelet[2442]: I0714 21:51:02.530812 2442 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp89m\" (UniqueName: \"kubernetes.io/projected/d8e339a0-f1ef-4377-b419-e88422d3110e-kube-api-access-zp89m\") pod \"d8e339a0-f1ef-4377-b419-e88422d3110e\" (UID: \"d8e339a0-f1ef-4377-b419-e88422d3110e\") " Jul 14 21:51:02.537643 kubelet[2442]: I0714 21:51:02.537031 2442 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8e339a0-f1ef-4377-b419-e88422d3110e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d8e339a0-f1ef-4377-b419-e88422d3110e" (UID: "d8e339a0-f1ef-4377-b419-e88422d3110e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 14 21:51:02.538360 kubelet[2442]: I0714 21:51:02.538325 2442 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e339a0-f1ef-4377-b419-e88422d3110e-kube-api-access-zp89m" (OuterVolumeSpecName: "kube-api-access-zp89m") pod "d8e339a0-f1ef-4377-b419-e88422d3110e" (UID: "d8e339a0-f1ef-4377-b419-e88422d3110e"). InnerVolumeSpecName "kube-api-access-zp89m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 14 21:51:02.539844 systemd[1]: var-lib-kubelet-pods-d8e339a0\x2df1ef\x2d4377\x2db419\x2de88422d3110e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzp89m.mount: Deactivated successfully. Jul 14 21:51:02.539953 systemd[1]: var-lib-kubelet-pods-d8e339a0\x2df1ef\x2d4377\x2db419\x2de88422d3110e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 14 21:51:02.541239 kubelet[2442]: I0714 21:51:02.541198 2442 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e339a0-f1ef-4377-b419-e88422d3110e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d8e339a0-f1ef-4377-b419-e88422d3110e" (UID: "d8e339a0-f1ef-4377-b419-e88422d3110e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 14 21:51:02.632092 kubelet[2442]: I0714 21:51:02.631307 2442 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zp89m\" (UniqueName: \"kubernetes.io/projected/d8e339a0-f1ef-4377-b419-e88422d3110e-kube-api-access-zp89m\") on node \"localhost\" DevicePath \"\"" Jul 14 21:51:02.632092 kubelet[2442]: I0714 21:51:02.631349 2442 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8e339a0-f1ef-4377-b419-e88422d3110e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 14 21:51:02.632092 kubelet[2442]: I0714 21:51:02.631359 2442 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d8e339a0-f1ef-4377-b419-e88422d3110e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 14 21:51:02.999201 systemd[1]: Removed slice kubepods-besteffort-podd8e339a0_f1ef_4377_b419_e88422d3110e.slice - libcontainer container kubepods-besteffort-podd8e339a0_f1ef_4377_b419_e88422d3110e.slice. Jul 14 21:51:03.234661 kubelet[2442]: I0714 21:51:03.234069 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c548b\" (UniqueName: \"kubernetes.io/projected/e254447f-6be9-441e-b91b-64a827cb65fb-kube-api-access-c548b\") pod \"whisker-748b49f847-8pkwj\" (UID: \"e254447f-6be9-441e-b91b-64a827cb65fb\") " pod="calico-system/whisker-748b49f847-8pkwj" Jul 14 21:51:03.234661 kubelet[2442]: I0714 21:51:03.234135 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e254447f-6be9-441e-b91b-64a827cb65fb-whisker-backend-key-pair\") pod \"whisker-748b49f847-8pkwj\" (UID: \"e254447f-6be9-441e-b91b-64a827cb65fb\") " pod="calico-system/whisker-748b49f847-8pkwj" Jul 14 21:51:03.234661 kubelet[2442]: I0714 21:51:03.234169 2442 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e254447f-6be9-441e-b91b-64a827cb65fb-whisker-ca-bundle\") pod \"whisker-748b49f847-8pkwj\" (UID: \"e254447f-6be9-441e-b91b-64a827cb65fb\") " pod="calico-system/whisker-748b49f847-8pkwj" Jul 14 21:51:03.237431 systemd[1]: Created slice kubepods-besteffort-pode254447f_6be9_441e_b91b_64a827cb65fb.slice - libcontainer container kubepods-besteffort-pode254447f_6be9_441e_b91b_64a827cb65fb.slice. 
Jul 14 21:51:03.541574 containerd[1433]: time="2025-07-14T21:51:03.541523550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-748b49f847-8pkwj,Uid:e254447f-6be9-441e-b91b-64a827cb65fb,Namespace:calico-system,Attempt:0,}" Jul 14 21:51:03.694663 kernel: bpftool[4001]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 14 21:51:03.729033 systemd-networkd[1364]: cali491d23d94f5: Link UP Jul 14 21:51:03.729215 systemd-networkd[1364]: cali491d23d94f5: Gained carrier Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.610 [INFO][3949] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.630 [INFO][3949] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--748b49f847--8pkwj-eth0 whisker-748b49f847- calico-system e254447f-6be9-441e-b91b-64a827cb65fb 890 0 2025-07-14 21:51:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:748b49f847 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-748b49f847-8pkwj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali491d23d94f5 [] [] }} ContainerID="cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" Namespace="calico-system" Pod="whisker-748b49f847-8pkwj" WorkloadEndpoint="localhost-k8s-whisker--748b49f847--8pkwj-" Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.631 [INFO][3949] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" Namespace="calico-system" Pod="whisker-748b49f847-8pkwj" WorkloadEndpoint="localhost-k8s-whisker--748b49f847--8pkwj-eth0" Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.664 [INFO][3984] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" HandleID="k8s-pod-network.cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" Workload="localhost-k8s-whisker--748b49f847--8pkwj-eth0" Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.665 [INFO][3984] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" HandleID="k8s-pod-network.cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" Workload="localhost-k8s-whisker--748b49f847--8pkwj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-748b49f847-8pkwj", "timestamp":"2025-07-14 21:51:03.664899727 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.665 [INFO][3984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.665 [INFO][3984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.665 [INFO][3984] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.676 [INFO][3984] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" host="localhost" Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.698 [INFO][3984] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.702 [INFO][3984] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.704 [INFO][3984] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.707 [INFO][3984] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.707 [INFO][3984] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" host="localhost" Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.708 [INFO][3984] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708 Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.712 [INFO][3984] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" host="localhost" Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.717 [INFO][3984] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" host="localhost" Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.717 [INFO][3984] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" host="localhost" Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.717 [INFO][3984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 21:51:03.743317 containerd[1433]: 2025-07-14 21:51:03.717 [INFO][3984] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" HandleID="k8s-pod-network.cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" Workload="localhost-k8s-whisker--748b49f847--8pkwj-eth0" Jul 14 21:51:03.743972 containerd[1433]: 2025-07-14 21:51:03.720 [INFO][3949] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" Namespace="calico-system" Pod="whisker-748b49f847-8pkwj" WorkloadEndpoint="localhost-k8s-whisker--748b49f847--8pkwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--748b49f847--8pkwj-eth0", GenerateName:"whisker-748b49f847-", Namespace:"calico-system", SelfLink:"", UID:"e254447f-6be9-441e-b91b-64a827cb65fb", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 51, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"748b49f847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-748b49f847-8pkwj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali491d23d94f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:03.743972 containerd[1433]: 2025-07-14 21:51:03.720 [INFO][3949] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" Namespace="calico-system" Pod="whisker-748b49f847-8pkwj" WorkloadEndpoint="localhost-k8s-whisker--748b49f847--8pkwj-eth0" Jul 14 21:51:03.743972 containerd[1433]: 2025-07-14 21:51:03.720 [INFO][3949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali491d23d94f5 ContainerID="cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" Namespace="calico-system" Pod="whisker-748b49f847-8pkwj" WorkloadEndpoint="localhost-k8s-whisker--748b49f847--8pkwj-eth0" Jul 14 21:51:03.743972 containerd[1433]: 2025-07-14 21:51:03.728 [INFO][3949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" Namespace="calico-system" Pod="whisker-748b49f847-8pkwj" WorkloadEndpoint="localhost-k8s-whisker--748b49f847--8pkwj-eth0" Jul 14 21:51:03.743972 containerd[1433]: 2025-07-14 21:51:03.730 [INFO][3949] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" Namespace="calico-system" Pod="whisker-748b49f847-8pkwj" WorkloadEndpoint="localhost-k8s-whisker--748b49f847--8pkwj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--748b49f847--8pkwj-eth0", GenerateName:"whisker-748b49f847-", Namespace:"calico-system", SelfLink:"", UID:"e254447f-6be9-441e-b91b-64a827cb65fb", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 51, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"748b49f847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708", Pod:"whisker-748b49f847-8pkwj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali491d23d94f5", MAC:"42:26:71:36:10:8b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:03.743972 containerd[1433]: 2025-07-14 21:51:03.739 [INFO][3949] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708" Namespace="calico-system" Pod="whisker-748b49f847-8pkwj" WorkloadEndpoint="localhost-k8s-whisker--748b49f847--8pkwj-eth0" Jul 14 21:51:03.763309 containerd[1433]: time="2025-07-14T21:51:03.763109483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:51:03.763309 containerd[1433]: time="2025-07-14T21:51:03.763282804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:51:03.763309 containerd[1433]: time="2025-07-14T21:51:03.763302644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:03.763493 containerd[1433]: time="2025-07-14T21:51:03.763459844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:03.782800 systemd[1]: Started cri-containerd-cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708.scope - libcontainer container cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708. 
Jul 14 21:51:03.795024 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:51:03.818209 containerd[1433]: time="2025-07-14T21:51:03.818115086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-748b49f847-8pkwj,Uid:e254447f-6be9-441e-b91b-64a827cb65fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708\"" Jul 14 21:51:03.820719 containerd[1433]: time="2025-07-14T21:51:03.820678448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 14 21:51:03.880681 systemd-networkd[1364]: vxlan.calico: Link UP Jul 14 21:51:03.880692 systemd-networkd[1364]: vxlan.calico: Gained carrier Jul 14 21:51:04.886970 containerd[1433]: time="2025-07-14T21:51:04.886918439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:04.889474 containerd[1433]: time="2025-07-14T21:51:04.889408721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 14 21:51:04.890587 containerd[1433]: time="2025-07-14T21:51:04.890544322Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:04.892645 containerd[1433]: time="2025-07-14T21:51:04.892614283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:04.893388 containerd[1433]: time="2025-07-14T21:51:04.893336204Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.072612316s" Jul 14 21:51:04.893388 containerd[1433]: time="2025-07-14T21:51:04.893379004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 14 21:51:04.896172 containerd[1433]: time="2025-07-14T21:51:04.895958125Z" level=info msg="CreateContainer within sandbox \"cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 14 21:51:04.911895 containerd[1433]: time="2025-07-14T21:51:04.911838297Z" level=info msg="CreateContainer within sandbox \"cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5ceb0bc4db479fe0b809d4e96f387d595b691d9f99fb46ec1f854383ada07cf0\"" Jul 14 21:51:04.913335 containerd[1433]: time="2025-07-14T21:51:04.912446338Z" level=info msg="StartContainer for \"5ceb0bc4db479fe0b809d4e96f387d595b691d9f99fb46ec1f854383ada07cf0\"" Jul 14 21:51:04.947664 systemd[1]: run-containerd-runc-k8s.io-5ceb0bc4db479fe0b809d4e96f387d595b691d9f99fb46ec1f854383ada07cf0-runc.h9Ghzg.mount: Deactivated successfully. 
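
For scale, the whisker pull above reports bytes read=4605614 over 1.072612316s; a quick check of the implied transfer rate, arithmetic on the quoted figures only:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Figures quoted from the whisker image-pull messages above.
        const bytesRead = 4605614
        d, err := time.ParseDuration("1.072612316s")
        if err != nil {
            panic(err)
        }
        fmt.Printf("~%.1f MiB/s\n", bytesRead/d.Seconds()/(1<<20)) // ≈ 4.1 MiB/s
    }
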
Jul 14 21:51:04.955795 systemd[1]: Started cri-containerd-5ceb0bc4db479fe0b809d4e96f387d595b691d9f99fb46ec1f854383ada07cf0.scope - libcontainer container 5ceb0bc4db479fe0b809d4e96f387d595b691d9f99fb46ec1f854383ada07cf0. Jul 14 21:51:04.989409 containerd[1433]: time="2025-07-14T21:51:04.989282314Z" level=info msg="StartContainer for \"5ceb0bc4db479fe0b809d4e96f387d595b691d9f99fb46ec1f854383ada07cf0\" returns successfully" Jul 14 21:51:04.991650 kubelet[2442]: I0714 21:51:04.991361 2442 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e339a0-f1ef-4377-b419-e88422d3110e" path="/var/lib/kubelet/pods/d8e339a0-f1ef-4377-b419-e88422d3110e/volumes" Jul 14 21:51:04.994805 containerd[1433]: time="2025-07-14T21:51:04.992887757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 14 21:51:05.770792 systemd-networkd[1364]: cali491d23d94f5: Gained IPv6LL Jul 14 21:51:05.834826 systemd-networkd[1364]: vxlan.calico: Gained IPv6LL Jul 14 21:51:06.656015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1311723404.mount: Deactivated successfully. Jul 14 21:51:06.679036 containerd[1433]: time="2025-07-14T21:51:06.678978687Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:06.680160 containerd[1433]: time="2025-07-14T21:51:06.680115087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 14 21:51:06.681035 containerd[1433]: time="2025-07-14T21:51:06.681006568Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:06.683299 containerd[1433]: time="2025-07-14T21:51:06.683241409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:06.684431 containerd[1433]: time="2025-07-14T21:51:06.684371890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.691445093s" Jul 14 21:51:06.684431 containerd[1433]: time="2025-07-14T21:51:06.684430730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 14 21:51:06.688423 containerd[1433]: time="2025-07-14T21:51:06.688358933Z" level=info msg="CreateContainer within sandbox \"cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 14 21:51:06.701874 containerd[1433]: time="2025-07-14T21:51:06.701814181Z" level=info msg="CreateContainer within sandbox \"cd9fe5d02c32822329b9bbc7477394363db87b57d28647e5c3d259ff0b24e708\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b68289f74724ea3c641b8176be2b191ac062d0e7e8ac7dcbca5ad9dfe1ce7bc8\"" Jul 14 21:51:06.702606 containerd[1433]: time="2025-07-14T21:51:06.702548982Z" level=info msg="StartContainer for 
\"b68289f74724ea3c641b8176be2b191ac062d0e7e8ac7dcbca5ad9dfe1ce7bc8\"" Jul 14 21:51:06.732793 systemd[1]: Started cri-containerd-b68289f74724ea3c641b8176be2b191ac062d0e7e8ac7dcbca5ad9dfe1ce7bc8.scope - libcontainer container b68289f74724ea3c641b8176be2b191ac062d0e7e8ac7dcbca5ad9dfe1ce7bc8. Jul 14 21:51:06.784090 containerd[1433]: time="2025-07-14T21:51:06.784035714Z" level=info msg="StartContainer for \"b68289f74724ea3c641b8176be2b191ac062d0e7e8ac7dcbca5ad9dfe1ce7bc8\" returns successfully" Jul 14 21:51:07.199540 kubelet[2442]: I0714 21:51:07.193199 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-748b49f847-8pkwj" podStartSLOduration=1.327598607 podStartE2EDuration="4.19318329s" podCreationTimestamp="2025-07-14 21:51:03 +0000 UTC" firstStartedPulling="2025-07-14 21:51:03.819858408 +0000 UTC m=+32.941761231" lastFinishedPulling="2025-07-14 21:51:06.685443091 +0000 UTC m=+35.807345914" observedRunningTime="2025-07-14 21:51:07.19263921 +0000 UTC m=+36.314542073" watchObservedRunningTime="2025-07-14 21:51:07.19318329 +0000 UTC m=+36.315086113" Jul 14 21:51:07.988604 containerd[1433]: time="2025-07-14T21:51:07.987567690Z" level=info msg="StopPodSandbox for \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\"" Jul 14 21:51:07.988604 containerd[1433]: time="2025-07-14T21:51:07.987683850Z" level=info msg="StopPodSandbox for \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\"" Jul 14 21:51:08.100667 containerd[1433]: 2025-07-14 21:51:08.053 [INFO][4263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Jul 14 21:51:08.100667 containerd[1433]: 2025-07-14 21:51:08.054 [INFO][4263] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" iface="eth0" netns="/var/run/netns/cni-4f51c88c-8c6d-a9d6-7e14-2c4a0514d836" Jul 14 21:51:08.100667 containerd[1433]: 2025-07-14 21:51:08.054 [INFO][4263] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" iface="eth0" netns="/var/run/netns/cni-4f51c88c-8c6d-a9d6-7e14-2c4a0514d836" Jul 14 21:51:08.100667 containerd[1433]: 2025-07-14 21:51:08.055 [INFO][4263] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" iface="eth0" netns="/var/run/netns/cni-4f51c88c-8c6d-a9d6-7e14-2c4a0514d836" Jul 14 21:51:08.100667 containerd[1433]: 2025-07-14 21:51:08.055 [INFO][4263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Jul 14 21:51:08.100667 containerd[1433]: 2025-07-14 21:51:08.055 [INFO][4263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Jul 14 21:51:08.100667 containerd[1433]: 2025-07-14 21:51:08.081 [INFO][4279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" HandleID="k8s-pod-network.ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Workload="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0" Jul 14 21:51:08.100667 containerd[1433]: 2025-07-14 21:51:08.081 [INFO][4279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 21:51:08.100667 containerd[1433]: 2025-07-14 21:51:08.081 [INFO][4279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:08.100667 containerd[1433]: 2025-07-14 21:51:08.093 [WARNING][4279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" HandleID="k8s-pod-network.ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Workload="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0" Jul 14 21:51:08.100667 containerd[1433]: 2025-07-14 21:51:08.093 [INFO][4279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" HandleID="k8s-pod-network.ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Workload="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0" Jul 14 21:51:08.100667 containerd[1433]: 2025-07-14 21:51:08.094 [INFO][4279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:08.100667 containerd[1433]: 2025-07-14 21:51:08.096 [INFO][4263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Jul 14 21:51:08.101686 containerd[1433]: time="2025-07-14T21:51:08.100811355Z" level=info msg="TearDown network for sandbox \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\" successfully" Jul 14 21:51:08.101686 containerd[1433]: time="2025-07-14T21:51:08.100839755Z" level=info msg="StopPodSandbox for \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\" returns successfully" Jul 14 21:51:08.101774 containerd[1433]: time="2025-07-14T21:51:08.101747355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-jcm2t,Uid:b4b62d59-0160-4a23-8603-079ff6a4f14c,Namespace:calico-system,Attempt:1,}" Jul 14 21:51:08.103062 systemd[1]: run-netns-cni\x2d4f51c88c\x2d8c6d\x2da9d6\x2d7e14\x2d2c4a0514d836.mount: Deactivated successfully. Jul 14 21:51:08.173774 containerd[1433]: 2025-07-14 21:51:08.054 [INFO][4258] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Jul 14 21:51:08.173774 containerd[1433]: 2025-07-14 21:51:08.054 [INFO][4258] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" iface="eth0" netns="/var/run/netns/cni-3f61f163-75c0-d172-7789-f72ed1c307f7" Jul 14 21:51:08.173774 containerd[1433]: 2025-07-14 21:51:08.054 [INFO][4258] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" iface="eth0" netns="/var/run/netns/cni-3f61f163-75c0-d172-7789-f72ed1c307f7" Jul 14 21:51:08.173774 containerd[1433]: 2025-07-14 21:51:08.056 [INFO][4258] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" iface="eth0" netns="/var/run/netns/cni-3f61f163-75c0-d172-7789-f72ed1c307f7" Jul 14 21:51:08.173774 containerd[1433]: 2025-07-14 21:51:08.056 [INFO][4258] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Jul 14 21:51:08.173774 containerd[1433]: 2025-07-14 21:51:08.056 [INFO][4258] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Jul 14 21:51:08.173774 containerd[1433]: 2025-07-14 21:51:08.082 [INFO][4281] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" HandleID="k8s-pod-network.6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Workload="localhost-k8s-coredns--668d6bf9bc--82m46-eth0" Jul 14 21:51:08.173774 containerd[1433]: 2025-07-14 21:51:08.082 [INFO][4281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:08.173774 containerd[1433]: 2025-07-14 21:51:08.095 [INFO][4281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:08.173774 containerd[1433]: 2025-07-14 21:51:08.165 [WARNING][4281] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" HandleID="k8s-pod-network.6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Workload="localhost-k8s-coredns--668d6bf9bc--82m46-eth0" Jul 14 21:51:08.173774 containerd[1433]: 2025-07-14 21:51:08.166 [INFO][4281] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" HandleID="k8s-pod-network.6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Workload="localhost-k8s-coredns--668d6bf9bc--82m46-eth0" Jul 14 21:51:08.173774 containerd[1433]: 2025-07-14 21:51:08.168 [INFO][4281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:08.173774 containerd[1433]: 2025-07-14 21:51:08.170 [INFO][4258] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Jul 14 21:51:08.173774 containerd[1433]: time="2025-07-14T21:51:08.173348396Z" level=info msg="TearDown network for sandbox \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\" successfully" Jul 14 21:51:08.173774 containerd[1433]: time="2025-07-14T21:51:08.173385236Z" level=info msg="StopPodSandbox for \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\" returns successfully" Jul 14 21:51:08.174728 kubelet[2442]: E0714 21:51:08.173850 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:51:08.175491 systemd[1]: run-netns-cni\x2d3f61f163\x2d75c0\x2dd172\x2d7789\x2df72ed1c307f7.mount: Deactivated successfully. 
Jul 14 21:51:08.177759 containerd[1433]: time="2025-07-14T21:51:08.176008317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-82m46,Uid:1ff05376-81e3-4cca-acdc-14244d51512a,Namespace:kube-system,Attempt:1,}" Jul 14 21:51:08.338400 systemd-networkd[1364]: cali082bf03dcbd: Link UP Jul 14 21:51:08.340843 systemd-networkd[1364]: cali082bf03dcbd: Gained carrier Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.240 [INFO][4302] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--82m46-eth0 coredns-668d6bf9bc- kube-system 1ff05376-81e3-4cca-acdc-14244d51512a 925 0 2025-07-14 21:50:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-82m46 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali082bf03dcbd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-82m46" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--82m46-" Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.240 [INFO][4302] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-82m46" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--82m46-eth0" Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.274 [INFO][4325] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" HandleID="k8s-pod-network.113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" Workload="localhost-k8s-coredns--668d6bf9bc--82m46-eth0" Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.274 [INFO][4325] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" HandleID="k8s-pod-network.113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" Workload="localhost-k8s-coredns--668d6bf9bc--82m46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004345e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-82m46", "timestamp":"2025-07-14 21:51:08.274815013 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.275 [INFO][4325] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.275 [INFO][4325] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.275 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.287 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" host="localhost" Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.292 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.297 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.299 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.302 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.302 [INFO][4325] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" host="localhost" Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.304 [INFO][4325] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0 Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.319 [INFO][4325] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" host="localhost" Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.332 [INFO][4325] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" host="localhost" Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.332 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" host="localhost" Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.332 [INFO][4325] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 21:51:08.370831 containerd[1433]: 2025-07-14 21:51:08.332 [INFO][4325] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" HandleID="k8s-pod-network.113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" Workload="localhost-k8s-coredns--668d6bf9bc--82m46-eth0" Jul 14 21:51:08.371534 containerd[1433]: 2025-07-14 21:51:08.334 [INFO][4302] cni-plugin/k8s.go 418: Populated endpoint ContainerID="113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-82m46" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--82m46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--82m46-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1ff05376-81e3-4cca-acdc-14244d51512a", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-82m46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali082bf03dcbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:08.371534 containerd[1433]: 2025-07-14 21:51:08.335 [INFO][4302] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-82m46" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--82m46-eth0" Jul 14 21:51:08.371534 containerd[1433]: 2025-07-14 21:51:08.335 [INFO][4302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali082bf03dcbd ContainerID="113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-82m46" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--82m46-eth0" Jul 14 21:51:08.371534 containerd[1433]: 2025-07-14 21:51:08.339 [INFO][4302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-82m46" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--82m46-eth0" Jul 14 21:51:08.371534 
containerd[1433]: 2025-07-14 21:51:08.342 [INFO][4302] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-82m46" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--82m46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--82m46-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1ff05376-81e3-4cca-acdc-14244d51512a", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0", Pod:"coredns-668d6bf9bc-82m46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali082bf03dcbd", MAC:"fe:44:71:01:a7:71", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:08.371534 containerd[1433]: 2025-07-14 21:51:08.368 [INFO][4302] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0" Namespace="kube-system" Pod="coredns-668d6bf9bc-82m46" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--82m46-eth0" Jul 14 21:51:08.395332 containerd[1433]: time="2025-07-14T21:51:08.395214001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:51:08.395332 containerd[1433]: time="2025-07-14T21:51:08.395325161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:51:08.395513 containerd[1433]: time="2025-07-14T21:51:08.395352161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:08.401930 containerd[1433]: time="2025-07-14T21:51:08.395740442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:08.443280 systemd[1]: Started cri-containerd-113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0.scope - libcontainer container 113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0. Jul 14 21:51:08.445201 systemd-networkd[1364]: calia8de201f017: Link UP Jul 14 21:51:08.445799 systemd-networkd[1364]: calia8de201f017: Gained carrier Jul 14 21:51:08.461299 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.246 [INFO][4309] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0 goldmane-768f4c5c69- calico-system b4b62d59-0160-4a23-8603-079ff6a4f14c 924 0 2025-07-14 21:50:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-jcm2t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia8de201f017 [] [] }} ContainerID="7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" Namespace="calico-system" Pod="goldmane-768f4c5c69-jcm2t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jcm2t-" Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.246 [INFO][4309] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" Namespace="calico-system" Pod="goldmane-768f4c5c69-jcm2t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0" Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.281 [INFO][4332] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" HandleID="k8s-pod-network.7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" Workload="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0" Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.281 [INFO][4332] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" HandleID="k8s-pod-network.7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" Workload="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000116810), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-jcm2t", "timestamp":"2025-07-14 21:51:08.281311937 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.283 [INFO][4332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.332 [INFO][4332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.332 [INFO][4332] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.389 [INFO][4332] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" host="localhost" Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.399 [INFO][4332] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.409 [INFO][4332] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.412 [INFO][4332] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.417 [INFO][4332] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.417 [INFO][4332] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" host="localhost" Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.419 [INFO][4332] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.424 [INFO][4332] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" host="localhost" Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.433 [INFO][4332] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" host="localhost" Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.433 [INFO][4332] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" host="localhost" Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.433 [INFO][4332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
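
Taken together, the three IPAM runs in this log hand out consecutive addresses from the same affine block: .129 (whisker), .130 (coredns), and here .131 (goldmane), all inside 192.168.88.128/26. A sketch of that block arithmetic with the standard library:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block and assignments quoted from the IPAM entries in this log.
        block := netip.MustParsePrefix("192.168.88.128/26") // 64 addresses, .128-.191
        addr := block.Addr().Next()                         // first assignment after .128
        for i := 0; i < 3; i++ {
            fmt.Println(addr, block.Contains(addr)) // .129 true, .130 true, .131 true
            addr = addr.Next()
        }
    }
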
Jul 14 21:51:08.467786 containerd[1433]: 2025-07-14 21:51:08.433 [INFO][4332] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" HandleID="k8s-pod-network.7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" Workload="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0" Jul 14 21:51:08.468660 containerd[1433]: 2025-07-14 21:51:08.438 [INFO][4309] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" Namespace="calico-system" Pod="goldmane-768f4c5c69-jcm2t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b4b62d59-0160-4a23-8603-079ff6a4f14c", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-jcm2t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8de201f017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:08.468660 containerd[1433]: 2025-07-14 21:51:08.438 [INFO][4309] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" Namespace="calico-system" Pod="goldmane-768f4c5c69-jcm2t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0" Jul 14 21:51:08.468660 containerd[1433]: 2025-07-14 21:51:08.438 [INFO][4309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8de201f017 ContainerID="7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" Namespace="calico-system" Pod="goldmane-768f4c5c69-jcm2t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0" Jul 14 21:51:08.468660 containerd[1433]: 2025-07-14 21:51:08.446 [INFO][4309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" Namespace="calico-system" Pod="goldmane-768f4c5c69-jcm2t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0" Jul 14 21:51:08.468660 containerd[1433]: 2025-07-14 21:51:08.446 [INFO][4309] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" Namespace="calico-system" Pod="goldmane-768f4c5c69-jcm2t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b4b62d59-0160-4a23-8603-079ff6a4f14c", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f", Pod:"goldmane-768f4c5c69-jcm2t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8de201f017", MAC:"d2:5d:9e:19:67:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:08.468660 containerd[1433]: 2025-07-14 21:51:08.457 [INFO][4309] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f" Namespace="calico-system" Pod="goldmane-768f4c5c69-jcm2t" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0" Jul 14 21:51:08.485668 containerd[1433]: time="2025-07-14T21:51:08.485494492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-82m46,Uid:1ff05376-81e3-4cca-acdc-14244d51512a,Namespace:kube-system,Attempt:1,} returns sandbox id \"113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0\"" Jul 14 21:51:08.487723 kubelet[2442]: E0714 21:51:08.487119 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:51:08.491626 containerd[1433]: time="2025-07-14T21:51:08.491521816Z" level=info msg="CreateContainer within sandbox \"113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:51:08.506534 containerd[1433]: time="2025-07-14T21:51:08.506251784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:51:08.506534 containerd[1433]: time="2025-07-14T21:51:08.506315024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:51:08.506534 containerd[1433]: time="2025-07-14T21:51:08.506344304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:08.506534 containerd[1433]: time="2025-07-14T21:51:08.506436104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:08.509008 containerd[1433]: time="2025-07-14T21:51:08.508934746Z" level=info msg="CreateContainer within sandbox \"113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"74352d6b632336696d630b115b975706c4af9f6393d45cadf89275bb9c4642e8\"" Jul 14 21:51:08.510169 containerd[1433]: time="2025-07-14T21:51:08.509968626Z" level=info msg="StartContainer for \"74352d6b632336696d630b115b975706c4af9f6393d45cadf89275bb9c4642e8\"" Jul 14 21:51:08.525784 systemd[1]: Started cri-containerd-7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f.scope - libcontainer container 7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f. Jul 14 21:51:08.536767 systemd[1]: Started cri-containerd-74352d6b632336696d630b115b975706c4af9f6393d45cadf89275bb9c4642e8.scope - libcontainer container 74352d6b632336696d630b115b975706c4af9f6393d45cadf89275bb9c4642e8. Jul 14 21:51:08.539840 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:51:08.563680 containerd[1433]: time="2025-07-14T21:51:08.563515697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-jcm2t,Uid:b4b62d59-0160-4a23-8603-079ff6a4f14c,Namespace:calico-system,Attempt:1,} returns sandbox id \"7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f\"" Jul 14 21:51:08.566222 containerd[1433]: time="2025-07-14T21:51:08.566170538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 14 21:51:08.579839 containerd[1433]: time="2025-07-14T21:51:08.579785106Z" level=info msg="StartContainer for \"74352d6b632336696d630b115b975706c4af9f6393d45cadf89275bb9c4642e8\" returns successfully" Jul 14 21:51:08.990780 containerd[1433]: time="2025-07-14T21:51:08.990590378Z" level=info msg="StopPodSandbox for \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\"" Jul 14 21:51:08.992516 containerd[1433]: time="2025-07-14T21:51:08.991302419Z" level=info msg="StopPodSandbox for \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\"" Jul 14 21:51:09.097092 containerd[1433]: 2025-07-14 21:51:09.050 [INFO][4501] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:51:09.097092 containerd[1433]: 2025-07-14 21:51:09.051 [INFO][4501] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" iface="eth0" netns="/var/run/netns/cni-15258bc7-f172-705d-2cd9-207f94804ec4" Jul 14 21:51:09.097092 containerd[1433]: 2025-07-14 21:51:09.052 [INFO][4501] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" iface="eth0" netns="/var/run/netns/cni-15258bc7-f172-705d-2cd9-207f94804ec4" Jul 14 21:51:09.097092 containerd[1433]: 2025-07-14 21:51:09.052 [INFO][4501] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" iface="eth0" netns="/var/run/netns/cni-15258bc7-f172-705d-2cd9-207f94804ec4" Jul 14 21:51:09.097092 containerd[1433]: 2025-07-14 21:51:09.052 [INFO][4501] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:51:09.097092 containerd[1433]: 2025-07-14 21:51:09.052 [INFO][4501] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:51:09.097092 containerd[1433]: 2025-07-14 21:51:09.078 [INFO][4519] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" HandleID="k8s-pod-network.68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Workload="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:09.097092 containerd[1433]: 2025-07-14 21:51:09.079 [INFO][4519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:09.097092 containerd[1433]: 2025-07-14 21:51:09.079 [INFO][4519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:09.097092 containerd[1433]: 2025-07-14 21:51:09.087 [WARNING][4519] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" HandleID="k8s-pod-network.68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Workload="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:09.097092 containerd[1433]: 2025-07-14 21:51:09.087 [INFO][4519] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" HandleID="k8s-pod-network.68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Workload="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:09.097092 containerd[1433]: 2025-07-14 21:51:09.089 [INFO][4519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:09.097092 containerd[1433]: 2025-07-14 21:51:09.092 [INFO][4501] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:51:09.097663 containerd[1433]: time="2025-07-14T21:51:09.097553156Z" level=info msg="TearDown network for sandbox \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\" successfully" Jul 14 21:51:09.097751 containerd[1433]: time="2025-07-14T21:51:09.097734316Z" level=info msg="StopPodSandbox for \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\" returns successfully" Jul 14 21:51:09.098808 containerd[1433]: time="2025-07-14T21:51:09.098762316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9n94f,Uid:c6dffae0-199e-4860-b66d-240601db16b1,Namespace:calico-system,Attempt:1,}" Jul 14 21:51:09.108770 containerd[1433]: 2025-07-14 21:51:09.048 [INFO][4502] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:51:09.108770 containerd[1433]: 2025-07-14 21:51:09.048 [INFO][4502] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" iface="eth0" netns="/var/run/netns/cni-90216848-a85c-a2a2-0af8-4b90595efe1b" Jul 14 21:51:09.108770 containerd[1433]: 2025-07-14 21:51:09.049 [INFO][4502] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" iface="eth0" netns="/var/run/netns/cni-90216848-a85c-a2a2-0af8-4b90595efe1b" Jul 14 21:51:09.108770 containerd[1433]: 2025-07-14 21:51:09.053 [INFO][4502] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" iface="eth0" netns="/var/run/netns/cni-90216848-a85c-a2a2-0af8-4b90595efe1b" Jul 14 21:51:09.108770 containerd[1433]: 2025-07-14 21:51:09.053 [INFO][4502] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:51:09.108770 containerd[1433]: 2025-07-14 21:51:09.053 [INFO][4502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:51:09.108770 containerd[1433]: 2025-07-14 21:51:09.081 [INFO][4521] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" HandleID="k8s-pod-network.4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Workload="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:09.108770 containerd[1433]: 2025-07-14 21:51:09.081 [INFO][4521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:09.108770 containerd[1433]: 2025-07-14 21:51:09.089 [INFO][4521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:09.108770 containerd[1433]: 2025-07-14 21:51:09.101 [WARNING][4521] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" HandleID="k8s-pod-network.4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Workload="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:09.108770 containerd[1433]: 2025-07-14 21:51:09.101 [INFO][4521] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" HandleID="k8s-pod-network.4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Workload="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:09.108770 containerd[1433]: 2025-07-14 21:51:09.103 [INFO][4521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:09.108770 containerd[1433]: 2025-07-14 21:51:09.105 [INFO][4502] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:51:09.109667 containerd[1433]: time="2025-07-14T21:51:09.108870402Z" level=info msg="TearDown network for sandbox \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\" successfully" Jul 14 21:51:09.109667 containerd[1433]: time="2025-07-14T21:51:09.108894722Z" level=info msg="StopPodSandbox for \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\" returns successfully" Jul 14 21:51:09.110220 containerd[1433]: time="2025-07-14T21:51:09.110195762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddcff449d-6w6dr,Uid:93233d1f-a8c9-4f31-8a33-73445cc9215c,Namespace:calico-apiserver,Attempt:1,}" Jul 14 21:51:09.195549 kubelet[2442]: E0714 21:51:09.195217 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:51:09.232969 kubelet[2442]: I0714 21:51:09.232123 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-82m46" podStartSLOduration=32.232104387 podStartE2EDuration="32.232104387s" podCreationTimestamp="2025-07-14 21:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:51:09.212011056 +0000 UTC m=+38.333913919" watchObservedRunningTime="2025-07-14 21:51:09.232104387 +0000 UTC m=+38.354007210" Jul 14 21:51:09.306938 systemd-networkd[1364]: calidf16b91e5e9: Link UP Jul 14 21:51:09.309241 systemd-networkd[1364]: calidf16b91e5e9: Gained carrier Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.162 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0 calico-apiserver-5ddcff449d- calico-apiserver 93233d1f-a8c9-4f31-8a33-73445cc9215c 942 0 2025-07-14 21:50:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ddcff449d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5ddcff449d-6w6dr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidf16b91e5e9 [] [] }} ContainerID="3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-6w6dr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-" Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.163 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-6w6dr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.207 [INFO][4568] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" HandleID="k8s-pod-network.3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" Workload="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.207 [INFO][4568] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" HandleID="k8s-pod-network.3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" Workload="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137390), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5ddcff449d-6w6dr", "timestamp":"2025-07-14 21:51:09.207650294 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.207 [INFO][4568] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.207 [INFO][4568] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.207 [INFO][4568] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.239 [INFO][4568] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" host="localhost" Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.253 [INFO][4568] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.262 [INFO][4568] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.265 [INFO][4568] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.275 [INFO][4568] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.275 [INFO][4568] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" host="localhost" Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.279 [INFO][4568] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35 Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.287 [INFO][4568] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" host="localhost" Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.300 [INFO][4568] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" host="localhost" Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.300 [INFO][4568] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" host="localhost" Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.300 [INFO][4568] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
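The kubelet "Nameserver limits exceeded" events that keep recurring (dns.go:153) trace back to glibc's resolver honouring at most three nameserver entries (MAXNS), so kubelet warns and applies only the first three. A rough sketch of that truncation follows; it is not kubelet's actual dns.go, and the fourth upstream 9.9.9.9 is hypothetical, since the log only shows the three survivors:

    // dns_limit_sketch.go - rough model of the behaviour behind the
    // "Nameserver limits exceeded" warnings above.
    package main

    import "fmt"

    const maxNameservers = 3 // glibc MAXNS

    func applyNameservers(ns []string) []string {
        if len(ns) > maxNameservers {
            ns = ns[:maxNameservers] // extra entries are dropped, as the log warns
        }
        return ns
    }

    func main() {
        // 9.9.9.9 is a hypothetical fourth entry; the applied line in the log
        // is exactly: 1.1.1.1 1.0.0.1 8.8.8.8
        fmt.Println(applyNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}))
    }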
Jul 14 21:51:09.327041 containerd[1433]: 2025-07-14 21:51:09.300 [INFO][4568] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" HandleID="k8s-pod-network.3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" Workload="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:09.327763 containerd[1433]: 2025-07-14 21:51:09.303 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-6w6dr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0", GenerateName:"calico-apiserver-5ddcff449d-", Namespace:"calico-apiserver", SelfLink:"", UID:"93233d1f-a8c9-4f31-8a33-73445cc9215c", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddcff449d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5ddcff449d-6w6dr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf16b91e5e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:09.327763 containerd[1433]: 2025-07-14 21:51:09.303 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-6w6dr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:09.327763 containerd[1433]: 2025-07-14 21:51:09.304 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf16b91e5e9 ContainerID="3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-6w6dr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:09.327763 containerd[1433]: 2025-07-14 21:51:09.310 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-6w6dr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:09.327763 containerd[1433]: 2025-07-14 21:51:09.311 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-6w6dr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0", GenerateName:"calico-apiserver-5ddcff449d-", Namespace:"calico-apiserver", SelfLink:"", UID:"93233d1f-a8c9-4f31-8a33-73445cc9215c", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddcff449d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35", Pod:"calico-apiserver-5ddcff449d-6w6dr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf16b91e5e9", MAC:"da:03:56:ba:f9:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:09.327763 containerd[1433]: 2025-07-14 21:51:09.324 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-6w6dr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:09.345499 containerd[1433]: time="2025-07-14T21:51:09.345384807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:51:09.345499 containerd[1433]: time="2025-07-14T21:51:09.345461327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:51:09.345499 containerd[1433]: time="2025-07-14T21:51:09.345477167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:09.345750 containerd[1433]: time="2025-07-14T21:51:09.345601407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:09.365794 systemd[1]: Started cri-containerd-3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35.scope - libcontainer container 3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35. Jul 14 21:51:09.373313 systemd[1]: run-netns-cni\x2d15258bc7\x2df172\x2d705d\x2d2cd9\x2d207f94804ec4.mount: Deactivated successfully. Jul 14 21:51:09.373426 systemd[1]: run-netns-cni\x2d90216848\x2da85c\x2da2a2\x2d0af8\x2d4b90595efe1b.mount: Deactivated successfully. 
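The unit names in the two run-netns lines above look mangled but are ordinary systemd path escaping: the leading slash is dropped, "/" maps to "-", and a literal "-" in the path is hex-escaped to \x2d. A minimal sketch of that rule, covering only the characters that occur in this log (the full systemd-escape algorithm handles more cases):

    // unit_escape_sketch.go - why the mount unit reads
    // "run-netns-cni\x2d15258bc7\x2d...": partial model of systemd path escaping.
    package main

    import (
        "fmt"
        "strings"
    )

    func escapePath(p string) string {
        p = strings.TrimPrefix(p, "/")
        var b strings.Builder
        for _, c := range []byte(p) {
            switch {
            case c == '/':
                b.WriteByte('-')
            case c == '-' || c == '\\':
                fmt.Fprintf(&b, `\x%02x`, c)
            default:
                b.WriteByte(c)
            }
        }
        return b.String()
    }

    func main() {
        fmt.Println(escapePath("/run/netns/cni-15258bc7-f172-705d-2cd9-207f94804ec4") + ".mount")
        // run-netns-cni\x2d15258bc7\x2df172\x2d705d\x2d2cd9\x2d207f94804ec4.mount
    }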
Jul 14 21:51:09.384897 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:51:09.394265 systemd-networkd[1364]: cali4f28f49a92f: Link UP Jul 14 21:51:09.395271 systemd-networkd[1364]: cali4f28f49a92f: Gained carrier Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.162 [INFO][4540] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9n94f-eth0 csi-node-driver- calico-system c6dffae0-199e-4860-b66d-240601db16b1 943 0 2025-07-14 21:50:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9n94f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4f28f49a92f [] [] }} ContainerID="be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" Namespace="calico-system" Pod="csi-node-driver-9n94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--9n94f-" Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.163 [INFO][4540] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" Namespace="calico-system" Pod="csi-node-driver-9n94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.221 [INFO][4574] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" HandleID="k8s-pod-network.be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" Workload="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.221 [INFO][4574] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" HandleID="k8s-pod-network.be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" Workload="localhost-k8s-csi--node--driver--9n94f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9n94f", "timestamp":"2025-07-14 21:51:09.221743822 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.221 [INFO][4574] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.300 [INFO][4574] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.300 [INFO][4574] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.335 [INFO][4574] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" host="localhost" Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.354 [INFO][4574] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.360 [INFO][4574] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.363 [INFO][4574] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.370 [INFO][4574] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.370 [INFO][4574] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" host="localhost" Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.373 [INFO][4574] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.380 [INFO][4574] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" host="localhost" Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.388 [INFO][4574] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" host="localhost" Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.388 [INFO][4574] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" host="localhost" Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.388 [INFO][4574] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
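Worth noting in the [4574] lines above: "About to acquire host-wide IPAM lock" is logged at 21:51:09.221 but "Acquired" only at 21:51:09.300, exactly when [4568] releases it, so the two concurrent CNI ADDs are serialised. A toy model of that serialisation; the real lock presumably has to work across separate plugin processes, which a single-process mutex only approximates:

    // ipam_lock_sketch.go - toy model of two CNI ADDs contending for the
    // host-wide IPAM lock, mirroring the interleaving in the log above.
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        var hostWide sync.Mutex // stands in for Calico's host-wide IPAM lock
        var wg sync.WaitGroup
        for _, pod := range []string{"calico-apiserver-5ddcff449d-6w6dr", "csi-node-driver-9n94f"} {
            wg.Add(1)
            go func(pod string) {
                defer wg.Done()
                fmt.Println(pod, "about to acquire host-wide IPAM lock")
                hostWide.Lock()
                fmt.Println(pod, "acquired host-wide IPAM lock")
                time.Sleep(50 * time.Millisecond) // assign an address from the block
                hostWide.Unlock()
                fmt.Println(pod, "released host-wide IPAM lock")
            }(pod)
        }
        wg.Wait()
    }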
Jul 14 21:51:09.418015 containerd[1433]: 2025-07-14 21:51:09.389 [INFO][4574] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" HandleID="k8s-pod-network.be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" Workload="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:09.418589 containerd[1433]: 2025-07-14 21:51:09.392 [INFO][4540] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" Namespace="calico-system" Pod="csi-node-driver-9n94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--9n94f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9n94f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6dffae0-199e-4860-b66d-240601db16b1", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9n94f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f28f49a92f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:09.418589 containerd[1433]: 2025-07-14 21:51:09.392 [INFO][4540] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" Namespace="calico-system" Pod="csi-node-driver-9n94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:09.418589 containerd[1433]: 2025-07-14 21:51:09.392 [INFO][4540] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f28f49a92f ContainerID="be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" Namespace="calico-system" Pod="csi-node-driver-9n94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:09.418589 containerd[1433]: 2025-07-14 21:51:09.395 [INFO][4540] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" Namespace="calico-system" Pod="csi-node-driver-9n94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:09.418589 containerd[1433]: 2025-07-14 21:51:09.395 [INFO][4540] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" Namespace="calico-system" Pod="csi-node-driver-9n94f" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--9n94f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9n94f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6dffae0-199e-4860-b66d-240601db16b1", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d", Pod:"csi-node-driver-9n94f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f28f49a92f", MAC:"ea:4c:d4:4b:0d:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:09.418589 containerd[1433]: 2025-07-14 21:51:09.412 [INFO][4540] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d" Namespace="calico-system" Pod="csi-node-driver-9n94f" WorkloadEndpoint="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:09.420441 containerd[1433]: time="2025-07-14T21:51:09.420399527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddcff449d-6w6dr,Uid:93233d1f-a8c9-4f31-8a33-73445cc9215c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35\"" Jul 14 21:51:09.435953 containerd[1433]: time="2025-07-14T21:51:09.435348135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:51:09.435953 containerd[1433]: time="2025-07-14T21:51:09.435743775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:51:09.435953 containerd[1433]: time="2025-07-14T21:51:09.435757975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:09.435953 containerd[1433]: time="2025-07-14T21:51:09.435850695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:09.459782 systemd[1]: Started cri-containerd-be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d.scope - libcontainer container be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d. 
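The MACs Calico stamps into the endpoint dumps (d2:5d:9e:19:67:2d, da:03:56:ba:f9:26, ea:4c:d4:4b:0d:12 so far) all look randomly generated: unicast, with the locally-administered bit set in the first octet. A quick check of that property:

    // mac_check_sketch.go - verify the veth MACs from the endpoint dumps are
    // unicast (I/G bit clear) and locally administered (U/L bit set).
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        for _, s := range []string{
            "d2:5d:9e:19:67:2d", "da:03:56:ba:f9:26", "ea:4c:d4:4b:0d:12",
        } {
            mac, err := net.ParseMAC(s)
            if err != nil {
                panic(err)
            }
            unicast := mac[0]&0x01 == 0 // I/G bit clear
            local := mac[0]&0x02 != 0   // U/L bit set
            fmt.Printf("%s unicast=%v locally-administered=%v\n", s, unicast, local)
        }
    }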
Jul 14 21:51:09.469730 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:51:09.479612 containerd[1433]: time="2025-07-14T21:51:09.479566678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9n94f,Uid:c6dffae0-199e-4860-b66d-240601db16b1,Namespace:calico-system,Attempt:1,} returns sandbox id \"be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d\"" Jul 14 21:51:09.931695 systemd-networkd[1364]: cali082bf03dcbd: Gained IPv6LL Jul 14 21:51:09.992579 containerd[1433]: time="2025-07-14T21:51:09.991809430Z" level=info msg="StopPodSandbox for \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\"" Jul 14 21:51:10.090678 containerd[1433]: 2025-07-14 21:51:10.047 [INFO][4703] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Jul 14 21:51:10.090678 containerd[1433]: 2025-07-14 21:51:10.047 [INFO][4703] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" iface="eth0" netns="/var/run/netns/cni-26f2f4a9-2b46-dc6a-bffb-a460cd312d74" Jul 14 21:51:10.090678 containerd[1433]: 2025-07-14 21:51:10.047 [INFO][4703] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" iface="eth0" netns="/var/run/netns/cni-26f2f4a9-2b46-dc6a-bffb-a460cd312d74" Jul 14 21:51:10.090678 containerd[1433]: 2025-07-14 21:51:10.047 [INFO][4703] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" iface="eth0" netns="/var/run/netns/cni-26f2f4a9-2b46-dc6a-bffb-a460cd312d74" Jul 14 21:51:10.090678 containerd[1433]: 2025-07-14 21:51:10.047 [INFO][4703] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Jul 14 21:51:10.090678 containerd[1433]: 2025-07-14 21:51:10.047 [INFO][4703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Jul 14 21:51:10.090678 containerd[1433]: 2025-07-14 21:51:10.072 [INFO][4711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" HandleID="k8s-pod-network.20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Workload="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" Jul 14 21:51:10.090678 containerd[1433]: 2025-07-14 21:51:10.072 [INFO][4711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:10.090678 containerd[1433]: 2025-07-14 21:51:10.072 [INFO][4711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:10.090678 containerd[1433]: 2025-07-14 21:51:10.082 [WARNING][4711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" HandleID="k8s-pod-network.20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Workload="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" Jul 14 21:51:10.090678 containerd[1433]: 2025-07-14 21:51:10.082 [INFO][4711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" HandleID="k8s-pod-network.20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Workload="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" Jul 14 21:51:10.090678 containerd[1433]: 2025-07-14 21:51:10.084 [INFO][4711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:10.090678 containerd[1433]: 2025-07-14 21:51:10.088 [INFO][4703] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Jul 14 21:51:10.094514 containerd[1433]: time="2025-07-14T21:51:10.090814800Z" level=info msg="TearDown network for sandbox \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\" successfully" Jul 14 21:51:10.094514 containerd[1433]: time="2025-07-14T21:51:10.090842720Z" level=info msg="StopPodSandbox for \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\" returns successfully" Jul 14 21:51:10.093088 systemd[1]: run-netns-cni\x2d26f2f4a9\x2d2b46\x2ddc6a\x2dbffb\x2da460cd312d74.mount: Deactivated successfully. Jul 14 21:51:10.094713 kubelet[2442]: E0714 21:51:10.092804 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:51:10.094988 containerd[1433]: time="2025-07-14T21:51:10.094551922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r2zgm,Uid:21056f12-8439-463a-a28c-d3964e6d90bc,Namespace:kube-system,Attempt:1,}" Jul 14 21:51:10.188609 systemd-networkd[1364]: calia8de201f017: Gained IPv6LL Jul 14 21:51:10.204775 kubelet[2442]: E0714 21:51:10.204745 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:51:10.234062 systemd-networkd[1364]: calidbab93df282: Link UP Jul 14 21:51:10.234873 systemd-networkd[1364]: calidbab93df282: Gained carrier Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.158 [INFO][4719] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0 coredns-668d6bf9bc- kube-system 21056f12-8439-463a-a28c-d3964e6d90bc 965 0 2025-07-14 21:50:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-r2zgm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidbab93df282 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2zgm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2zgm-" Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.158 [INFO][4719] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" 
Namespace="kube-system" Pod="coredns-668d6bf9bc-r2zgm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.187 [INFO][4734] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" HandleID="k8s-pod-network.e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" Workload="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.187 [INFO][4734] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" HandleID="k8s-pod-network.e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" Workload="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-r2zgm", "timestamp":"2025-07-14 21:51:10.187473088 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.187 [INFO][4734] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.187 [INFO][4734] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.187 [INFO][4734] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.199 [INFO][4734] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" host="localhost" Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.206 [INFO][4734] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.211 [INFO][4734] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.213 [INFO][4734] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.215 [INFO][4734] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.216 [INFO][4734] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" host="localhost" Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.217 [INFO][4734] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.221 [INFO][4734] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" host="localhost" Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.228 [INFO][4734] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" host="localhost" Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.229 [INFO][4734] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" host="localhost" Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.229 [INFO][4734] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:10.248516 containerd[1433]: 2025-07-14 21:51:10.229 [INFO][4734] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" HandleID="k8s-pod-network.e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" Workload="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" Jul 14 21:51:10.249095 containerd[1433]: 2025-07-14 21:51:10.231 [INFO][4719] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2zgm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21056f12-8439-463a-a28c-d3964e6d90bc", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-r2zgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbab93df282", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:10.249095 containerd[1433]: 2025-07-14 21:51:10.232 [INFO][4719] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2zgm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" Jul 14 21:51:10.249095 containerd[1433]: 2025-07-14 21:51:10.232 [INFO][4719] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidbab93df282 
ContainerID="e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2zgm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" Jul 14 21:51:10.249095 containerd[1433]: 2025-07-14 21:51:10.233 [INFO][4719] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2zgm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" Jul 14 21:51:10.249095 containerd[1433]: 2025-07-14 21:51:10.234 [INFO][4719] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2zgm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21056f12-8439-463a-a28c-d3964e6d90bc", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce", Pod:"coredns-668d6bf9bc-r2zgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbab93df282", MAC:"fa:51:cc:cb:c4:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:10.249095 containerd[1433]: 2025-07-14 21:51:10.244 [INFO][4719] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce" Namespace="kube-system" Pod="coredns-668d6bf9bc-r2zgm" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" Jul 14 21:51:10.267622 containerd[1433]: time="2025-07-14T21:51:10.265655327Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:51:10.267622 containerd[1433]: time="2025-07-14T21:51:10.265726167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:51:10.267622 containerd[1433]: time="2025-07-14T21:51:10.265748607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:10.267622 containerd[1433]: time="2025-07-14T21:51:10.265831207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:10.287712 systemd[1]: Started cri-containerd-e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce.scope - libcontainer container e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce. Jul 14 21:51:10.300289 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:51:10.318041 containerd[1433]: time="2025-07-14T21:51:10.317827713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r2zgm,Uid:21056f12-8439-463a-a28c-d3964e6d90bc,Namespace:kube-system,Attempt:1,} returns sandbox id \"e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce\"" Jul 14 21:51:10.318965 kubelet[2442]: E0714 21:51:10.318937 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:51:10.321420 containerd[1433]: time="2025-07-14T21:51:10.321384595Z" level=info msg="CreateContainer within sandbox \"e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 21:51:10.337709 containerd[1433]: time="2025-07-14T21:51:10.337667483Z" level=info msg="CreateContainer within sandbox \"e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"868197ebcf7a0a4eb4cb59975296551ab124991c0a31127fcd6b730762192d2a\"" Jul 14 21:51:10.339619 containerd[1433]: time="2025-07-14T21:51:10.338711443Z" level=info msg="StartContainer for \"868197ebcf7a0a4eb4cb59975296551ab124991c0a31127fcd6b730762192d2a\"" Jul 14 21:51:10.387733 systemd[1]: Started cri-containerd-868197ebcf7a0a4eb4cb59975296551ab124991c0a31127fcd6b730762192d2a.scope - libcontainer container 868197ebcf7a0a4eb4cb59975296551ab124991c0a31127fcd6b730762192d2a. Jul 14 21:51:10.407966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1996329158.mount: Deactivated successfully. 
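[editor's note] The recurring kubelet warning "Nameserver limits exceeded" in the records above comes from kubelet's resolv.conf handling: Linux resolvers (and Kubernetes' validation) cap the nameserver list at three, so kubelet keeps the first three entries and logs the applied line ("1.1.1.1 1.0.0.1 8.8.8.8" here, with the rest omitted). A minimal stdlib-Go sketch of that truncation — the constant and function names are illustrative, not kubelet's actual symbols:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the glibc/Kubernetes limit of 3 resolvers;
// entries beyond it are dropped, which triggers the warning seen above.
const maxNameservers = 3

func applyNameserverLimit(servers []string) (applied []string, omitted int) {
	if len(servers) <= maxNameservers {
		return servers, 0
	}
	return servers[:maxNameservers], len(servers) - maxNameservers
}

func main() {
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, omitted := applyNameserverLimit(servers)
	if omitted > 0 {
		fmt.Printf("Nameserver limits were exceeded, %d omitted, applied: %s\n",
			omitted, strings.Join(applied, " "))
	}
}
```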
Jul 14 21:51:10.438848 containerd[1433]: time="2025-07-14T21:51:10.438742573Z" level=info msg="StartContainer for \"868197ebcf7a0a4eb4cb59975296551ab124991c0a31127fcd6b730762192d2a\" returns successfully" Jul 14 21:51:10.635881 systemd-networkd[1364]: calidf16b91e5e9: Gained IPv6LL Jul 14 21:51:10.740872 containerd[1433]: time="2025-07-14T21:51:10.740820923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:10.741425 containerd[1433]: time="2025-07-14T21:51:10.741388004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 14 21:51:10.742419 containerd[1433]: time="2025-07-14T21:51:10.742359084Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:10.744993 containerd[1433]: time="2025-07-14T21:51:10.744938525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:10.747210 containerd[1433]: time="2025-07-14T21:51:10.746851006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.180522588s" Jul 14 21:51:10.747210 containerd[1433]: time="2025-07-14T21:51:10.746899526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 14 21:51:10.749130 containerd[1433]: time="2025-07-14T21:51:10.748928007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 21:51:10.750894 containerd[1433]: time="2025-07-14T21:51:10.750798608Z" level=info msg="CreateContainer within sandbox \"7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 14 21:51:10.763442 containerd[1433]: time="2025-07-14T21:51:10.763300775Z" level=info msg="CreateContainer within sandbox \"7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"cc8f373c6f76142474a75f669ee5f53411197d45cb4d8dca006e99c3cad2fde7\"" Jul 14 21:51:10.765553 containerd[1433]: time="2025-07-14T21:51:10.765173296Z" level=info msg="StartContainer for \"cc8f373c6f76142474a75f669ee5f53411197d45cb4d8dca006e99c3cad2fde7\"" Jul 14 21:51:10.810749 systemd[1]: Started cri-containerd-cc8f373c6f76142474a75f669ee5f53411197d45cb4d8dca006e99c3cad2fde7.scope - libcontainer container cc8f373c6f76142474a75f669ee5f53411197d45cb4d8dca006e99c3cad2fde7. 
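[editor's note] The goldmane pull record above reports both bytes read (61838790) and wall-clock duration ("in 2.180522588s"), which is enough to derive effective pull throughput. A quick stdlib check using those logged values:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the goldmane:v3.30.2 pull record above.
	bytesRead := 61838790.0
	elapsed, err := time.ParseDuration("2.180522588s")
	if err != nil {
		panic(err)
	}
	mib := bytesRead / (1 << 20)
	fmt.Printf("pulled %.1f MiB in %s (%.1f MiB/s)\n",
		mib, elapsed, mib/elapsed.Seconds()) // ~59.0 MiB at ~27 MiB/s
}
```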
Jul 14 21:51:10.845389 containerd[1433]: time="2025-07-14T21:51:10.845322175Z" level=info msg="StartContainer for \"cc8f373c6f76142474a75f669ee5f53411197d45cb4d8dca006e99c3cad2fde7\" returns successfully" Jul 14 21:51:11.018733 systemd-networkd[1364]: cali4f28f49a92f: Gained IPv6LL Jul 14 21:51:11.215873 kubelet[2442]: E0714 21:51:11.215830 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:51:11.216291 kubelet[2442]: E0714 21:51:11.215919 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:51:11.245959 kubelet[2442]: I0714 21:51:11.245880 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-jcm2t" podStartSLOduration=20.063212818 podStartE2EDuration="22.245859887s" podCreationTimestamp="2025-07-14 21:50:49 +0000 UTC" firstStartedPulling="2025-07-14 21:51:08.565308858 +0000 UTC m=+37.687211681" lastFinishedPulling="2025-07-14 21:51:10.747955927 +0000 UTC m=+39.869858750" observedRunningTime="2025-07-14 21:51:11.226230358 +0000 UTC m=+40.348133181" watchObservedRunningTime="2025-07-14 21:51:11.245859887 +0000 UTC m=+40.367762670" Jul 14 21:51:11.402702 systemd-networkd[1364]: calidbab93df282: Gained IPv6LL Jul 14 21:51:11.638836 systemd[1]: Started sshd@7-10.0.0.52:22-10.0.0.1:49948.service - OpenSSH per-connection server daemon (10.0.0.1:49948). Jul 14 21:51:11.688696 sshd[4914]: Accepted publickey for core from 10.0.0.1 port 49948 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:51:11.690231 sshd[4914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:51:11.695867 systemd-logind[1412]: New session 8 of user core. Jul 14 21:51:11.702680 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 14 21:51:11.956371 sshd[4914]: pam_unix(sshd:session): session closed for user core Jul 14 21:51:11.959294 systemd[1]: sshd@7-10.0.0.52:22-10.0.0.1:49948.service: Deactivated successfully. Jul 14 21:51:11.963469 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 21:51:11.965471 systemd-logind[1412]: Session 8 logged out. Waiting for processes to exit. Jul 14 21:51:11.966598 systemd-logind[1412]: Removed session 8. 
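[editor's note] The sshd record above identifies the accepted key by its SHA256 fingerprint ("RSA SHA256:M1w9XMnl/..."). OpenSSH computes that fingerprint as the unpadded base64 of the SHA-256 digest of the wire-format key blob — the base64 payload of an authorized_keys line. A stdlib sketch (the key material below is a placeholder, not the key from this log):

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
	"strings"
)

// fingerprintSHA256 reproduces OpenSSH's SHA256 key fingerprint:
// base64 (without '=' padding) of the SHA-256 digest of the key blob.
func fingerprintSHA256(authorizedKeyLine string) (string, error) {
	fields := strings.Fields(authorizedKeyLine)
	if len(fields) < 2 {
		return "", fmt.Errorf("malformed authorized_keys line")
	}
	blob, err := base64.StdEncoding.DecodeString(fields[1])
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(blob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]), nil
}

func main() {
	// Placeholder blob; a real line carries the full base64-encoded key.
	fp, err := fingerprintSHA256("ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC7 core@host")
	fmt.Println(fp, err)
}
```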
Jul 14 21:51:11.989926 containerd[1433]: time="2025-07-14T21:51:11.989511594Z" level=info msg="StopPodSandbox for \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\"" Jul 14 21:51:11.989926 containerd[1433]: time="2025-07-14T21:51:11.989703274Z" level=info msg="StopPodSandbox for \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\"" Jul 14 21:51:12.057599 kubelet[2442]: I0714 21:51:12.057522 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-r2zgm" podStartSLOduration=35.057501984 podStartE2EDuration="35.057501984s" podCreationTimestamp="2025-07-14 21:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 21:51:11.246293607 +0000 UTC m=+40.368196430" watchObservedRunningTime="2025-07-14 21:51:12.057501984 +0000 UTC m=+41.179404807" Jul 14 21:51:12.134929 containerd[1433]: 2025-07-14 21:51:12.056 [INFO][4945] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Jul 14 21:51:12.134929 containerd[1433]: 2025-07-14 21:51:12.057 [INFO][4945] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" iface="eth0" netns="/var/run/netns/cni-0efe4276-b4bf-e846-6104-aff8fb28ce8d" Jul 14 21:51:12.134929 containerd[1433]: 2025-07-14 21:51:12.057 [INFO][4945] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" iface="eth0" netns="/var/run/netns/cni-0efe4276-b4bf-e846-6104-aff8fb28ce8d" Jul 14 21:51:12.134929 containerd[1433]: 2025-07-14 21:51:12.057 [INFO][4945] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" iface="eth0" netns="/var/run/netns/cni-0efe4276-b4bf-e846-6104-aff8fb28ce8d" Jul 14 21:51:12.134929 containerd[1433]: 2025-07-14 21:51:12.057 [INFO][4945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Jul 14 21:51:12.134929 containerd[1433]: 2025-07-14 21:51:12.057 [INFO][4945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Jul 14 21:51:12.134929 containerd[1433]: 2025-07-14 21:51:12.083 [INFO][4965] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" HandleID="k8s-pod-network.cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Workload="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0" Jul 14 21:51:12.134929 containerd[1433]: 2025-07-14 21:51:12.084 [INFO][4965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:12.134929 containerd[1433]: 2025-07-14 21:51:12.084 [INFO][4965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:12.134929 containerd[1433]: 2025-07-14 21:51:12.093 [WARNING][4965] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" HandleID="k8s-pod-network.cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Workload="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0" Jul 14 21:51:12.134929 containerd[1433]: 2025-07-14 21:51:12.093 [INFO][4965] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" HandleID="k8s-pod-network.cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Workload="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0" Jul 14 21:51:12.134929 containerd[1433]: 2025-07-14 21:51:12.124 [INFO][4965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:12.134929 containerd[1433]: 2025-07-14 21:51:12.126 [INFO][4945] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Jul 14 21:51:12.138583 containerd[1433]: time="2025-07-14T21:51:12.135554738Z" level=info msg="TearDown network for sandbox \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\" successfully" Jul 14 21:51:12.138583 containerd[1433]: time="2025-07-14T21:51:12.135656858Z" level=info msg="StopPodSandbox for \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\" returns successfully" Jul 14 21:51:12.138583 containerd[1433]: time="2025-07-14T21:51:12.138415900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fcd7777df-hlm2l,Uid:8792d4f6-4ad7-4255-a985-d0b68c2e01d9,Namespace:calico-system,Attempt:1,}" Jul 14 21:51:12.140672 systemd[1]: run-netns-cni\x2d0efe4276\x2db4bf\x2de846\x2d6104\x2daff8fb28ce8d.mount: Deactivated successfully. Jul 14 21:51:12.148886 containerd[1433]: 2025-07-14 21:51:12.060 [INFO][4953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:51:12.148886 containerd[1433]: 2025-07-14 21:51:12.060 [INFO][4953] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" iface="eth0" netns="/var/run/netns/cni-d0a7ab29-9ec5-5918-1b20-a5acc050799a" Jul 14 21:51:12.148886 containerd[1433]: 2025-07-14 21:51:12.060 [INFO][4953] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" iface="eth0" netns="/var/run/netns/cni-d0a7ab29-9ec5-5918-1b20-a5acc050799a" Jul 14 21:51:12.148886 containerd[1433]: 2025-07-14 21:51:12.060 [INFO][4953] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" iface="eth0" netns="/var/run/netns/cni-d0a7ab29-9ec5-5918-1b20-a5acc050799a" Jul 14 21:51:12.148886 containerd[1433]: 2025-07-14 21:51:12.060 [INFO][4953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:51:12.148886 containerd[1433]: 2025-07-14 21:51:12.060 [INFO][4953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:51:12.148886 containerd[1433]: 2025-07-14 21:51:12.085 [INFO][4971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" HandleID="k8s-pod-network.b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Workload="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:12.148886 containerd[1433]: 2025-07-14 21:51:12.086 [INFO][4971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:12.148886 containerd[1433]: 2025-07-14 21:51:12.124 [INFO][4971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:12.148886 containerd[1433]: 2025-07-14 21:51:12.137 [WARNING][4971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" HandleID="k8s-pod-network.b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Workload="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:12.148886 containerd[1433]: 2025-07-14 21:51:12.137 [INFO][4971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" HandleID="k8s-pod-network.b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Workload="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:12.148886 containerd[1433]: 2025-07-14 21:51:12.140 [INFO][4971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:12.148886 containerd[1433]: 2025-07-14 21:51:12.146 [INFO][4953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:51:12.149733 containerd[1433]: time="2025-07-14T21:51:12.149694145Z" level=info msg="TearDown network for sandbox \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\" successfully" Jul 14 21:51:12.149733 containerd[1433]: time="2025-07-14T21:51:12.149730945Z" level=info msg="StopPodSandbox for \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\" returns successfully" Jul 14 21:51:12.150410 containerd[1433]: time="2025-07-14T21:51:12.150339065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddcff449d-zw4fd,Uid:d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6,Namespace:calico-apiserver,Attempt:1,}" Jul 14 21:51:12.151874 systemd[1]: run-netns-cni\x2dd0a7ab29\x2d9ec5\x2d5918\x2d1b20\x2da5acc050799a.mount: Deactivated successfully. 
Jul 14 21:51:12.217608 kubelet[2442]: E0714 21:51:12.217497 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:51:12.286140 systemd-networkd[1364]: calif73848623a8: Link UP Jul 14 21:51:12.286344 systemd-networkd[1364]: calif73848623a8: Gained carrier Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.199 [INFO][4984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0 calico-kube-controllers-5fcd7777df- calico-system 8792d4f6-4ad7-4255-a985-d0b68c2e01d9 1028 0 2025-07-14 21:50:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fcd7777df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5fcd7777df-hlm2l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif73848623a8 [] [] }} ContainerID="623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" Namespace="calico-system" Pod="calico-kube-controllers-5fcd7777df-hlm2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-" Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.199 [INFO][4984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" Namespace="calico-system" Pod="calico-kube-controllers-5fcd7777df-hlm2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0" Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.237 [INFO][5014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" HandleID="k8s-pod-network.623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" Workload="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0" Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.237 [INFO][5014] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" HandleID="k8s-pod-network.623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" Workload="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5fcd7777df-hlm2l", "timestamp":"2025-07-14 21:51:12.237766223 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.237 [INFO][5014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.238 [INFO][5014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.238 [INFO][5014] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.250 [INFO][5014] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" host="localhost" Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.258 [INFO][5014] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.263 [INFO][5014] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.265 [INFO][5014] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.267 [INFO][5014] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.267 [INFO][5014] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" host="localhost" Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.269 [INFO][5014] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6 Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.273 [INFO][5014] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" host="localhost" Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.278 [INFO][5014] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" host="localhost" Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.278 [INFO][5014] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" host="localhost" Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.278 [INFO][5014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
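[editor's note] The assignment walk above (confirm block affinity for 192.168.88.128/26 → attempt assignment → claim 192.168.88.135, the next ordinal after coredns took .134) can be approximated with net/netip: scan the block in order and take the first unclaimed address. This is a simplification of Calico's per-block allocation bitmap, which also tracks reservations and handles:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks a CIDR block in order and returns the first address
// not present in allocated — a simplified model of claiming an ordinal
// from a Calico IPAM block.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{}
	// Mark .128-.134 as taken, matching the assignments already logged.
	for a, i := block.Addr(), 0; i < 7; a, i = a.Next(), i+1 {
		allocated[a] = true
	}
	if a, ok := nextFree(block, allocated); ok {
		fmt.Println("next assignment:", a) // 192.168.88.135
	}
}
```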
Jul 14 21:51:12.306889 containerd[1433]: 2025-07-14 21:51:12.278 [INFO][5014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" HandleID="k8s-pod-network.623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" Workload="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0" Jul 14 21:51:12.307401 containerd[1433]: 2025-07-14 21:51:12.281 [INFO][4984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" Namespace="calico-system" Pod="calico-kube-controllers-5fcd7777df-hlm2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0", GenerateName:"calico-kube-controllers-5fcd7777df-", Namespace:"calico-system", SelfLink:"", UID:"8792d4f6-4ad7-4255-a985-d0b68c2e01d9", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fcd7777df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5fcd7777df-hlm2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif73848623a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:12.307401 containerd[1433]: 2025-07-14 21:51:12.282 [INFO][4984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" Namespace="calico-system" Pod="calico-kube-controllers-5fcd7777df-hlm2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0" Jul 14 21:51:12.307401 containerd[1433]: 2025-07-14 21:51:12.282 [INFO][4984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif73848623a8 ContainerID="623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" Namespace="calico-system" Pod="calico-kube-controllers-5fcd7777df-hlm2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0" Jul 14 21:51:12.307401 containerd[1433]: 2025-07-14 21:51:12.287 [INFO][4984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" Namespace="calico-system" Pod="calico-kube-controllers-5fcd7777df-hlm2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0" Jul 14 21:51:12.307401 containerd[1433]: 2025-07-14 21:51:12.287 [INFO][4984] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" Namespace="calico-system" Pod="calico-kube-controllers-5fcd7777df-hlm2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0", GenerateName:"calico-kube-controllers-5fcd7777df-", Namespace:"calico-system", SelfLink:"", UID:"8792d4f6-4ad7-4255-a985-d0b68c2e01d9", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fcd7777df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6", Pod:"calico-kube-controllers-5fcd7777df-hlm2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif73848623a8", MAC:"ba:f7:54:8e:73:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:12.307401 containerd[1433]: 2025-07-14 21:51:12.301 [INFO][4984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6" Namespace="calico-system" Pod="calico-kube-controllers-5fcd7777df-hlm2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0" Jul 14 21:51:12.337377 containerd[1433]: time="2025-07-14T21:51:12.336841786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:51:12.337377 containerd[1433]: time="2025-07-14T21:51:12.336920106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:51:12.337377 containerd[1433]: time="2025-07-14T21:51:12.336931746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:12.337377 containerd[1433]: time="2025-07-14T21:51:12.337076706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:12.355741 systemd[1]: Started cri-containerd-623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6.scope - libcontainer container 623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6. 
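[editor's note] Host-side interface names like "calif73848623a8" and "calidbab93df282" are deterministic: to the best of my knowledge, libcalico-go derives them as a fixed "cali" prefix plus a truncated SHA-1 of the workload identity, keeping the name within Linux's 15-byte IFNAMSIZ. Treat the exact hash input below ("namespace.pod") as an assumption that may not reproduce this log's names on every Calico version:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethNameForWorkload sketches how Calico picks deterministic host-side
// veth names: "cali" + the first 11 hex chars of a SHA-1 over the
// workload identity. The input string is an assumption; verify against
// your Calico release before relying on it.
func vethNameForWorkload(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethNameForWorkload("calico-system", "calico-kube-controllers-5fcd7777df-hlm2l"))
}
```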
Jul 14 21:51:12.380096 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:51:12.400731 systemd-networkd[1364]: cali819a31c2137: Link UP Jul 14 21:51:12.402083 systemd-networkd[1364]: cali819a31c2137: Gained carrier Jul 14 21:51:12.415887 containerd[1433]: time="2025-07-14T21:51:12.415823341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fcd7777df-hlm2l,Uid:8792d4f6-4ad7-4255-a985-d0b68c2e01d9,Namespace:calico-system,Attempt:1,} returns sandbox id \"623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6\"" Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.220 [INFO][4996] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0 calico-apiserver-5ddcff449d- calico-apiserver d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6 1027 0 2025-07-14 21:50:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ddcff449d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5ddcff449d-zw4fd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali819a31c2137 [] [] }} ContainerID="2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-zw4fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-" Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.220 [INFO][4996] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-zw4fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.258 [INFO][5028] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" HandleID="k8s-pod-network.2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" Workload="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.258 [INFO][5028] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" HandleID="k8s-pod-network.2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" Workload="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5ddcff449d-zw4fd", "timestamp":"2025-07-14 21:51:12.258048312 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.258 [INFO][5028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.278 [INFO][5028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.278 [INFO][5028] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.351 [INFO][5028] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" host="localhost" Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.359 [INFO][5028] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.370 [INFO][5028] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.373 [INFO][5028] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.375 [INFO][5028] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.375 [INFO][5028] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" host="localhost" Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.377 [INFO][5028] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9 Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.383 [INFO][5028] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" host="localhost" Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.391 [INFO][5028] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" host="localhost" Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.391 [INFO][5028] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" host="localhost" Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.391 [INFO][5028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
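[editor's note] Every assignment above is bracketed by the same triple — "About to acquire host-wide IPAM lock." / "Acquired" / "Released" — which serializes concurrent CNI ADDs on one node so two pods can never claim the same ordinal from a shared block. A minimal sketch of that serialization with a plain mutex (illustrative, not Calico's locking code):

```go
package main

import (
	"fmt"
	"sync"
)

// allocator serializes assignments the way the host-wide IPAM lock does:
// concurrent CNI ADDs take turns, so ordinals are handed out exactly once.
type allocator struct {
	mu   sync.Mutex // the "host-wide IPAM lock"
	next int
}

func (a *allocator) assign(pod string) int {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ord := a.next
	a.next++
	fmt.Printf("assigned ordinal %d to %s\n", ord, pod)
	return ord
}

func main() {
	a := &allocator{next: 7} // ordinal 7 in 192.168.88.128/26 is .135
	var wg sync.WaitGroup
	for _, pod := range []string{
		"calico-kube-controllers-5fcd7777df-hlm2l",
		"calico-apiserver-5ddcff449d-zw4fd",
	} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); a.assign(p) }(pod)
	}
	wg.Wait()
}
```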
Jul 14 21:51:12.428605 containerd[1433]: 2025-07-14 21:51:12.391 [INFO][5028] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" HandleID="k8s-pod-network.2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" Workload="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:12.429496 containerd[1433]: 2025-07-14 21:51:12.395 [INFO][4996] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-zw4fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0", GenerateName:"calico-apiserver-5ddcff449d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddcff449d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5ddcff449d-zw4fd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali819a31c2137", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:12.429496 containerd[1433]: 2025-07-14 21:51:12.396 [INFO][4996] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-zw4fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:12.429496 containerd[1433]: 2025-07-14 21:51:12.396 [INFO][4996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali819a31c2137 ContainerID="2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-zw4fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:12.429496 containerd[1433]: 2025-07-14 21:51:12.402 [INFO][4996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-zw4fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:12.429496 containerd[1433]: 2025-07-14 21:51:12.404 [INFO][4996] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-zw4fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0", GenerateName:"calico-apiserver-5ddcff449d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddcff449d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9", Pod:"calico-apiserver-5ddcff449d-zw4fd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali819a31c2137", MAC:"76:e0:1c:7a:7c:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:12.429496 containerd[1433]: 2025-07-14 21:51:12.425 [INFO][4996] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9" Namespace="calico-apiserver" Pod="calico-apiserver-5ddcff449d-zw4fd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:12.446860 containerd[1433]: time="2025-07-14T21:51:12.446751234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 21:51:12.446860 containerd[1433]: time="2025-07-14T21:51:12.446818914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 21:51:12.447118 containerd[1433]: time="2025-07-14T21:51:12.446834234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:12.447118 containerd[1433]: time="2025-07-14T21:51:12.446926875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 21:51:12.469772 systemd[1]: Started cri-containerd-2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9.scope - libcontainer container 2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9. 
Jul 14 21:51:12.486160 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 21:51:12.516633 containerd[1433]: time="2025-07-14T21:51:12.516593505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddcff449d-zw4fd,Uid:d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9\"" Jul 14 21:51:12.827106 containerd[1433]: time="2025-07-14T21:51:12.827057521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:12.827907 containerd[1433]: time="2025-07-14T21:51:12.827738561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 14 21:51:12.828837 containerd[1433]: time="2025-07-14T21:51:12.828592641Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:12.830967 containerd[1433]: time="2025-07-14T21:51:12.830773522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:12.831793 containerd[1433]: time="2025-07-14T21:51:12.831755283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.082792756s" Jul 14 21:51:12.831847 containerd[1433]: time="2025-07-14T21:51:12.831798363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 14 21:51:12.833013 containerd[1433]: time="2025-07-14T21:51:12.832988763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 14 21:51:12.834461 containerd[1433]: time="2025-07-14T21:51:12.834427604Z" level=info msg="CreateContainer within sandbox \"3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 21:51:12.845579 containerd[1433]: time="2025-07-14T21:51:12.844728849Z" level=info msg="CreateContainer within sandbox \"3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d8deaf0e3ec5a55bb71353a7abdfacc98f7887f72f41bc8c135a6bd5aee13a6\"" Jul 14 21:51:12.846461 containerd[1433]: time="2025-07-14T21:51:12.845374929Z" level=info msg="StartContainer for \"8d8deaf0e3ec5a55bb71353a7abdfacc98f7887f72f41bc8c135a6bd5aee13a6\"" Jul 14 21:51:12.873804 systemd[1]: Started cri-containerd-8d8deaf0e3ec5a55bb71353a7abdfacc98f7887f72f41bc8c135a6bd5aee13a6.scope - libcontainer container 8d8deaf0e3ec5a55bb71353a7abdfacc98f7887f72f41bc8c135a6bd5aee13a6. 
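[editor's note] Each "Pulled image" record above names the same image three ways: repo tag (ghcr.io/flatcar/calico/apiserver:v3.30.2), repo digest (...@sha256:ec6b1066...), and the resolved image id. A deliberately naive stdlib sketch of splitting such a reference — real code should use a proper reference-parsing library:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef breaks an image reference like the ones logged above into
// repository, tag, and digest.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		ref, digest = ref[:i], ref[i+1:]
	}
	// A ':' after the last '/' is a tag separator, not a registry port.
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		ref, tag = ref[:i], ref[i+1:]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, _ := splitRef("ghcr.io/flatcar/calico/apiserver:v3.30.2")
	fmt.Println(repo, tag)

	repo, _, digest := splitRef("ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b")
	fmt.Println(repo, digest)
}
```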
Jul 14 21:51:12.968644 containerd[1433]: time="2025-07-14T21:51:12.968597343Z" level=info msg="StartContainer for \"8d8deaf0e3ec5a55bb71353a7abdfacc98f7887f72f41bc8c135a6bd5aee13a6\" returns successfully" Jul 14 21:51:13.227869 kubelet[2442]: E0714 21:51:13.227223 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 21:51:13.241857 kubelet[2442]: I0714 21:51:13.241248 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ddcff449d-6w6dr" podStartSLOduration=24.83058954 podStartE2EDuration="28.241226535s" podCreationTimestamp="2025-07-14 21:50:45 +0000 UTC" firstStartedPulling="2025-07-14 21:51:09.422144048 +0000 UTC m=+38.544046871" lastFinishedPulling="2025-07-14 21:51:12.832780963 +0000 UTC m=+41.954683866" observedRunningTime="2025-07-14 21:51:13.240188015 +0000 UTC m=+42.362090838" watchObservedRunningTime="2025-07-14 21:51:13.241226535 +0000 UTC m=+42.363129358" Jul 14 21:51:13.578814 systemd-networkd[1364]: cali819a31c2137: Gained IPv6LL Jul 14 21:51:13.908864 containerd[1433]: time="2025-07-14T21:51:13.908733209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:13.911439 containerd[1433]: time="2025-07-14T21:51:13.909454689Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 14 21:51:13.911439 containerd[1433]: time="2025-07-14T21:51:13.910303370Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:13.912842 containerd[1433]: time="2025-07-14T21:51:13.912808931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:13.913805 containerd[1433]: time="2025-07-14T21:51:13.913767371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.080747448s" Jul 14 21:51:13.913869 containerd[1433]: time="2025-07-14T21:51:13.913812651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 14 21:51:13.917050 containerd[1433]: time="2025-07-14T21:51:13.917022133Z" level=info msg="CreateContainer within sandbox \"be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 14 21:51:13.917710 containerd[1433]: time="2025-07-14T21:51:13.917676533Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 14 21:51:13.935940 containerd[1433]: time="2025-07-14T21:51:13.935895260Z" level=info msg="CreateContainer within sandbox \"be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"905a6c8c26ac4658ef3b443385d7343d7d547892c8abea48abbc142cba987dfb\"" Jul 14 21:51:13.936826 
containerd[1433]: time="2025-07-14T21:51:13.936795021Z" level=info msg="StartContainer for \"905a6c8c26ac4658ef3b443385d7343d7d547892c8abea48abbc142cba987dfb\"" Jul 14 21:51:13.971786 systemd[1]: Started cri-containerd-905a6c8c26ac4658ef3b443385d7343d7d547892c8abea48abbc142cba987dfb.scope - libcontainer container 905a6c8c26ac4658ef3b443385d7343d7d547892c8abea48abbc142cba987dfb. Jul 14 21:51:14.021393 containerd[1433]: time="2025-07-14T21:51:14.021334855Z" level=info msg="StartContainer for \"905a6c8c26ac4658ef3b443385d7343d7d547892c8abea48abbc142cba987dfb\" returns successfully" Jul 14 21:51:14.235690 kubelet[2442]: I0714 21:51:14.235625 2442 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:51:14.283690 systemd-networkd[1364]: calif73848623a8: Gained IPv6LL Jul 14 21:51:16.134108 containerd[1433]: time="2025-07-14T21:51:16.134063597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:16.135336 containerd[1433]: time="2025-07-14T21:51:16.134793277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 14 21:51:16.136887 containerd[1433]: time="2025-07-14T21:51:16.136834758Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:16.138924 containerd[1433]: time="2025-07-14T21:51:16.138883678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:16.140494 containerd[1433]: time="2025-07-14T21:51:16.140458959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.222742386s" Jul 14 21:51:16.140645 containerd[1433]: time="2025-07-14T21:51:16.140497799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 14 21:51:16.142317 containerd[1433]: time="2025-07-14T21:51:16.142259520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 21:51:16.154453 containerd[1433]: time="2025-07-14T21:51:16.154172564Z" level=info msg="CreateContainer within sandbox \"623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 14 21:51:16.171191 containerd[1433]: time="2025-07-14T21:51:16.171101649Z" level=info msg="CreateContainer within sandbox \"623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"30281f09b2b20579b426344d722bb5fcdd8682b6b96a11d3707cca5fffe42965\"" Jul 14 21:51:16.171979 containerd[1433]: time="2025-07-14T21:51:16.171949250Z" level=info msg="StartContainer for \"30281f09b2b20579b426344d722bb5fcdd8682b6b96a11d3707cca5fffe42965\"" Jul 14 21:51:16.211764 systemd[1]: Started 
cri-containerd-30281f09b2b20579b426344d722bb5fcdd8682b6b96a11d3707cca5fffe42965.scope - libcontainer container 30281f09b2b20579b426344d722bb5fcdd8682b6b96a11d3707cca5fffe42965. Jul 14 21:51:16.355253 containerd[1433]: time="2025-07-14T21:51:16.355142872Z" level=info msg="StartContainer for \"30281f09b2b20579b426344d722bb5fcdd8682b6b96a11d3707cca5fffe42965\" returns successfully" Jul 14 21:51:16.441060 containerd[1433]: time="2025-07-14T21:51:16.440913980Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 21:51:16.442748 containerd[1433]: time="2025-07-14T21:51:16.442702581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 14 21:51:16.445162 containerd[1433]: time="2025-07-14T21:51:16.445121822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 302.824062ms" Jul 14 21:51:16.445162 containerd[1433]: time="2025-07-14T21:51:16.445163782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 14 21:51:16.446046 containerd[1433]: time="2025-07-14T21:51:16.446009422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 14 21:51:16.447082 containerd[1433]: time="2025-07-14T21:51:16.447049983Z" level=info msg="CreateContainer within sandbox \"2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 21:51:16.483048 containerd[1433]: time="2025-07-14T21:51:16.482908315Z" level=info msg="CreateContainer within sandbox \"2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"32b696056749ac98e627232222a076ffb735eff61a90980a58eabb122b8a74d5\"" Jul 14 21:51:16.483768 containerd[1433]: time="2025-07-14T21:51:16.483697915Z" level=info msg="StartContainer for \"32b696056749ac98e627232222a076ffb735eff61a90980a58eabb122b8a74d5\"" Jul 14 21:51:16.510807 systemd[1]: Started cri-containerd-32b696056749ac98e627232222a076ffb735eff61a90980a58eabb122b8a74d5.scope - libcontainer container 32b696056749ac98e627232222a076ffb735eff61a90980a58eabb122b8a74d5. Jul 14 21:51:16.544397 containerd[1433]: time="2025-07-14T21:51:16.544328895Z" level=info msg="StartContainer for \"32b696056749ac98e627232222a076ffb735eff61a90980a58eabb122b8a74d5\" returns successfully" Jul 14 21:51:16.971438 systemd[1]: Started sshd@8-10.0.0.52:22-10.0.0.1:45214.service - OpenSSH per-connection server daemon (10.0.0.1:45214). Jul 14 21:51:17.063589 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 45214 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:51:17.065385 sshd[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:51:17.071743 systemd-logind[1412]: New session 9 of user core. Jul 14 21:51:17.078753 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jul 14 21:51:17.321581 kubelet[2442]: I0714 21:51:17.319554 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ddcff449d-zw4fd" podStartSLOduration=28.392206915 podStartE2EDuration="32.319536671s" podCreationTimestamp="2025-07-14 21:50:45 +0000 UTC" firstStartedPulling="2025-07-14 21:51:12.518553026 +0000 UTC m=+41.640455849" lastFinishedPulling="2025-07-14 21:51:16.445882782 +0000 UTC m=+45.567785605" observedRunningTime="2025-07-14 21:51:17.281149938 +0000 UTC m=+46.403052761" watchObservedRunningTime="2025-07-14 21:51:17.319536671 +0000 UTC m=+46.441439454"
Jul 14 21:51:17.697213 sshd[5346]: pam_unix(sshd:session): session closed for user core
Jul 14 21:51:17.702808 systemd[1]: sshd@8-10.0.0.52:22-10.0.0.1:45214.service: Deactivated successfully.
Jul 14 21:51:17.708294 systemd[1]: session-9.scope: Deactivated successfully.
Jul 14 21:51:17.709588 systemd-logind[1412]: Session 9 logged out. Waiting for processes to exit.
Jul 14 21:51:17.711150 systemd-logind[1412]: Removed session 9.
Jul 14 21:51:17.736894 containerd[1433]: time="2025-07-14T21:51:17.736824883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:51:17.737553 containerd[1433]: time="2025-07-14T21:51:17.737520123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366"
Jul 14 21:51:17.738600 containerd[1433]: time="2025-07-14T21:51:17.738570803Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:51:17.741504 containerd[1433]: time="2025-07-14T21:51:17.741464524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 21:51:17.742279 containerd[1433]: time="2025-07-14T21:51:17.742243005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.296198423s"
Jul 14 21:51:17.742335 containerd[1433]: time="2025-07-14T21:51:17.742280525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\""
Jul 14 21:51:17.744574 containerd[1433]: time="2025-07-14T21:51:17.744503525Z" level=info msg="CreateContainer within sandbox \"be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 14 21:51:17.760339 containerd[1433]: time="2025-07-14T21:51:17.760295170Z" level=info msg="CreateContainer within sandbox \"be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"754333153a6e53f8bb74da8bd6a96affb6e125627efe5b54276254dbb4495feb\""
Jul 14 21:51:17.761181 containerd[1433]: time="2025-07-14T21:51:17.761152931Z" level=info msg="StartContainer for \"754333153a6e53f8bb74da8bd6a96affb6e125627efe5b54276254dbb4495feb\""
Jul 14 21:51:17.803809 systemd[1]: Started cri-containerd-754333153a6e53f8bb74da8bd6a96affb6e125627efe5b54276254dbb4495feb.scope - libcontainer container 754333153a6e53f8bb74da8bd6a96affb6e125627efe5b54276254dbb4495feb.
Jul 14 21:51:17.864972 containerd[1433]: time="2025-07-14T21:51:17.864925283Z" level=info msg="StartContainer for \"754333153a6e53f8bb74da8bd6a96affb6e125627efe5b54276254dbb4495feb\" returns successfully"
Jul 14 21:51:18.067529 kubelet[2442]: I0714 21:51:18.067485 2442 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 14 21:51:18.077783 kubelet[2442]: I0714 21:51:18.077734 2442 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 14 21:51:18.262448 kubelet[2442]: I0714 21:51:18.262417 2442 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 21:51:18.262911 kubelet[2442]: I0714 21:51:18.262892 2442 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 21:51:18.275399 kubelet[2442]: I0714 21:51:18.275328 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9n94f" podStartSLOduration=21.013172282 podStartE2EDuration="29.275311688s" podCreationTimestamp="2025-07-14 21:50:49 +0000 UTC" firstStartedPulling="2025-07-14 21:51:09.480758399 +0000 UTC m=+38.602661222" lastFinishedPulling="2025-07-14 21:51:17.742897805 +0000 UTC m=+46.864800628" observedRunningTime="2025-07-14 21:51:18.273829808 +0000 UTC m=+47.395732631" watchObservedRunningTime="2025-07-14 21:51:18.275311688 +0000 UTC m=+47.397214511"
Jul 14 21:51:18.275955 kubelet[2442]: I0714 21:51:18.275912 2442 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5fcd7777df-hlm2l" podStartSLOduration=25.554023792 podStartE2EDuration="29.275900248s" podCreationTimestamp="2025-07-14 21:50:49 +0000 UTC" firstStartedPulling="2025-07-14 21:51:12.419912463 +0000 UTC m=+41.541815286" lastFinishedPulling="2025-07-14 21:51:16.141788919 +0000 UTC m=+45.263691742" observedRunningTime="2025-07-14 21:51:17.320009911 +0000 UTC m=+46.441912694" watchObservedRunningTime="2025-07-14 21:51:18.275900248 +0000 UTC m=+47.397803071"
Jul 14 21:51:20.902158 kubelet[2442]: I0714 21:51:20.902093 2442 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 14 21:51:22.709157 systemd[1]: Started sshd@9-10.0.0.52:22-10.0.0.1:49424.service - OpenSSH per-connection server daemon (10.0.0.1:49424).
Jul 14 21:51:22.765018 sshd[5450]: Accepted publickey for core from 10.0.0.1 port 49424 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:51:22.765777 sshd[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:51:22.769880 systemd-logind[1412]: New session 10 of user core.
Jul 14 21:51:22.774741 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 14 21:51:22.953572 sshd[5450]: pam_unix(sshd:session): session closed for user core
Jul 14 21:51:22.962329 systemd[1]: sshd@9-10.0.0.52:22-10.0.0.1:49424.service: Deactivated successfully.
Jul 14 21:51:22.965674 systemd[1]: session-10.scope: Deactivated successfully.
Jul 14 21:51:22.969957 systemd-logind[1412]: Session 10 logged out. Waiting for processes to exit.
Jul 14 21:51:22.979871 systemd[1]: Started sshd@10-10.0.0.52:22-10.0.0.1:49426.service - OpenSSH per-connection server daemon (10.0.0.1:49426).
Jul 14 21:51:22.983114 systemd-logind[1412]: Removed session 10.
Jul 14 21:51:23.019090 sshd[5466]: Accepted publickey for core from 10.0.0.1 port 49426 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:51:23.021103 sshd[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:51:23.027388 systemd-logind[1412]: New session 11 of user core.
Jul 14 21:51:23.033723 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 14 21:51:23.250356 sshd[5466]: pam_unix(sshd:session): session closed for user core
Jul 14 21:51:23.261301 systemd[1]: sshd@10-10.0.0.52:22-10.0.0.1:49426.service: Deactivated successfully.
Jul 14 21:51:23.267264 systemd[1]: session-11.scope: Deactivated successfully.
Jul 14 21:51:23.269519 systemd-logind[1412]: Session 11 logged out. Waiting for processes to exit.
Jul 14 21:51:23.276969 systemd[1]: Started sshd@11-10.0.0.52:22-10.0.0.1:49434.service - OpenSSH per-connection server daemon (10.0.0.1:49434).
Jul 14 21:51:23.279052 systemd-logind[1412]: Removed session 11.
Jul 14 21:51:23.318075 sshd[5478]: Accepted publickey for core from 10.0.0.1 port 49434 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:51:23.319487 sshd[5478]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:51:23.323834 systemd-logind[1412]: New session 12 of user core.
Jul 14 21:51:23.343745 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 14 21:51:23.477514 sshd[5478]: pam_unix(sshd:session): session closed for user core
Jul 14 21:51:23.480982 systemd[1]: sshd@11-10.0.0.52:22-10.0.0.1:49434.service: Deactivated successfully.
Jul 14 21:51:23.482929 systemd[1]: session-12.scope: Deactivated successfully.
Jul 14 21:51:23.483546 systemd-logind[1412]: Session 12 logged out. Waiting for processes to exit.
Jul 14 21:51:23.484649 systemd-logind[1412]: Removed session 12.
Jul 14 21:51:28.491741 systemd[1]: Started sshd@12-10.0.0.52:22-10.0.0.1:49438.service - OpenSSH per-connection server daemon (10.0.0.1:49438).
Jul 14 21:51:28.529510 sshd[5502]: Accepted publickey for core from 10.0.0.1 port 49438 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:51:28.530836 sshd[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:51:28.534579 systemd-logind[1412]: New session 13 of user core.
Jul 14 21:51:28.543754 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 14 21:51:28.668135 sshd[5502]: pam_unix(sshd:session): session closed for user core
Jul 14 21:51:28.678182 systemd[1]: sshd@12-10.0.0.52:22-10.0.0.1:49438.service: Deactivated successfully.
Jul 14 21:51:28.680028 systemd[1]: session-13.scope: Deactivated successfully.
Jul 14 21:51:28.681527 systemd-logind[1412]: Session 13 logged out. Waiting for processes to exit.
Jul 14 21:51:28.693885 systemd[1]: Started sshd@13-10.0.0.52:22-10.0.0.1:49450.service - OpenSSH per-connection server daemon (10.0.0.1:49450).
Jul 14 21:51:28.695266 systemd-logind[1412]: Removed session 13.
Jul 14 21:51:28.729488 sshd[5516]: Accepted publickey for core from 10.0.0.1 port 49450 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:51:28.730690 sshd[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:51:28.734522 systemd-logind[1412]: New session 14 of user core.
Jul 14 21:51:28.740778 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 14 21:51:28.972942 sshd[5516]: pam_unix(sshd:session): session closed for user core
Jul 14 21:51:28.987304 systemd[1]: sshd@13-10.0.0.52:22-10.0.0.1:49450.service: Deactivated successfully.
Jul 14 21:51:28.989355 systemd[1]: session-14.scope: Deactivated successfully.
Jul 14 21:51:28.990501 systemd-logind[1412]: Session 14 logged out. Waiting for processes to exit.
Jul 14 21:51:29.004974 systemd[1]: Started sshd@14-10.0.0.52:22-10.0.0.1:49452.service - OpenSSH per-connection server daemon (10.0.0.1:49452).
Jul 14 21:51:29.006247 systemd-logind[1412]: Removed session 14.
Jul 14 21:51:29.042679 sshd[5529]: Accepted publickey for core from 10.0.0.1 port 49452 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:51:29.044054 sshd[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:51:29.048992 systemd-logind[1412]: New session 15 of user core.
Jul 14 21:51:29.056810 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 14 21:51:29.672324 sshd[5529]: pam_unix(sshd:session): session closed for user core
Jul 14 21:51:29.681782 systemd[1]: sshd@14-10.0.0.52:22-10.0.0.1:49452.service: Deactivated successfully.
Jul 14 21:51:29.684342 systemd[1]: session-15.scope: Deactivated successfully.
Jul 14 21:51:29.685777 systemd-logind[1412]: Session 15 logged out. Waiting for processes to exit.
Jul 14 21:51:29.692876 systemd[1]: Started sshd@15-10.0.0.52:22-10.0.0.1:49460.service - OpenSSH per-connection server daemon (10.0.0.1:49460).
Jul 14 21:51:29.695984 systemd-logind[1412]: Removed session 15.
Jul 14 21:51:29.732784 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 49460 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:51:29.734618 sshd[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:51:29.739808 systemd-logind[1412]: New session 16 of user core.
Jul 14 21:51:29.745796 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 14 21:51:30.113771 sshd[5549]: pam_unix(sshd:session): session closed for user core
Jul 14 21:51:30.122898 systemd[1]: sshd@15-10.0.0.52:22-10.0.0.1:49460.service: Deactivated successfully.
Jul 14 21:51:30.125565 systemd[1]: session-16.scope: Deactivated successfully.
Jul 14 21:51:30.126821 systemd-logind[1412]: Session 16 logged out. Waiting for processes to exit.
Jul 14 21:51:30.135172 systemd[1]: Started sshd@16-10.0.0.52:22-10.0.0.1:49476.service - OpenSSH per-connection server daemon (10.0.0.1:49476).
Jul 14 21:51:30.138176 systemd-logind[1412]: Removed session 16.
Jul 14 21:51:30.176101 sshd[5562]: Accepted publickey for core from 10.0.0.1 port 49476 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U
Jul 14 21:51:30.177787 sshd[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 21:51:30.181918 systemd-logind[1412]: New session 17 of user core.
Jul 14 21:51:30.190767 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 14 21:51:30.313203 sshd[5562]: pam_unix(sshd:session): session closed for user core
Jul 14 21:51:30.316952 systemd[1]: sshd@16-10.0.0.52:22-10.0.0.1:49476.service: Deactivated successfully.
Jul 14 21:51:30.319983 systemd[1]: session-17.scope: Deactivated successfully.
Jul 14 21:51:30.320648 systemd-logind[1412]: Session 17 logged out. Waiting for processes to exit.
Jul 14 21:51:30.321532 systemd-logind[1412]: Removed session 17.
Jul 14 21:51:30.961630 containerd[1433]: time="2025-07-14T21:51:30.961519379Z" level=info msg="StopPodSandbox for \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\""
Jul 14 21:51:31.050433 containerd[1433]: 2025-07-14 21:51:31.011 [WARNING][5585] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b4b62d59-0160-4a23-8603-079ff6a4f14c", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f", Pod:"goldmane-768f4c5c69-jcm2t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8de201f017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 21:51:31.050433 containerd[1433]: 2025-07-14 21:51:31.014 [INFO][5585] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e"
Jul 14 21:51:31.050433 containerd[1433]: 2025-07-14 21:51:31.014 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" iface="eth0" netns=""
Jul 14 21:51:31.050433 containerd[1433]: 2025-07-14 21:51:31.014 [INFO][5585] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e"
Jul 14 21:51:31.050433 containerd[1433]: 2025-07-14 21:51:31.014 [INFO][5585] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e"
Jul 14 21:51:31.050433 containerd[1433]: 2025-07-14 21:51:31.036 [INFO][5596] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" HandleID="k8s-pod-network.ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Workload="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0"
Jul 14 21:51:31.050433 containerd[1433]: 2025-07-14 21:51:31.036 [INFO][5596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:51:31.050433 containerd[1433]: 2025-07-14 21:51:31.036 [INFO][5596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 21:51:31.050433 containerd[1433]: 2025-07-14 21:51:31.045 [WARNING][5596] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" HandleID="k8s-pod-network.ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Workload="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0"
Jul 14 21:51:31.050433 containerd[1433]: 2025-07-14 21:51:31.045 [INFO][5596] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" HandleID="k8s-pod-network.ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Workload="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0"
Jul 14 21:51:31.050433 containerd[1433]: 2025-07-14 21:51:31.046 [INFO][5596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:51:31.050433 containerd[1433]: 2025-07-14 21:51:31.048 [INFO][5585] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e"
Jul 14 21:51:31.051054 containerd[1433]: time="2025-07-14T21:51:31.050475991Z" level=info msg="TearDown network for sandbox \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\" successfully"
Jul 14 21:51:31.051054 containerd[1433]: time="2025-07-14T21:51:31.050499711Z" level=info msg="StopPodSandbox for \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\" returns successfully"
Jul 14 21:51:31.051510 containerd[1433]: time="2025-07-14T21:51:31.051484511Z" level=info msg="RemovePodSandbox for \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\""
Jul 14 21:51:31.059545 containerd[1433]: time="2025-07-14T21:51:31.059504312Z" level=info msg="Forcibly stopping sandbox \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\""
Jul 14 21:51:31.122145 containerd[1433]: 2025-07-14 21:51:31.091 [WARNING][5614] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b4b62d59-0160-4a23-8603-079ff6a4f14c", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7afabcdea8223ff558685fe214c2706849e96bb4a471bc840fca35a0e76aac9f", Pod:"goldmane-768f4c5c69-jcm2t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8de201f017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 21:51:31.122145 containerd[1433]: 2025-07-14 21:51:31.091 [INFO][5614] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e"
Jul 14 21:51:31.122145 containerd[1433]: 2025-07-14 21:51:31.091 [INFO][5614] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" iface="eth0" netns=""
Jul 14 21:51:31.122145 containerd[1433]: 2025-07-14 21:51:31.091 [INFO][5614] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e"
Jul 14 21:51:31.122145 containerd[1433]: 2025-07-14 21:51:31.091 [INFO][5614] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e"
Jul 14 21:51:31.122145 containerd[1433]: 2025-07-14 21:51:31.109 [INFO][5623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" HandleID="k8s-pod-network.ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Workload="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0"
Jul 14 21:51:31.122145 containerd[1433]: 2025-07-14 21:51:31.109 [INFO][5623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:51:31.122145 containerd[1433]: 2025-07-14 21:51:31.109 [INFO][5623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 21:51:31.122145 containerd[1433]: 2025-07-14 21:51:31.117 [WARNING][5623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" HandleID="k8s-pod-network.ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Workload="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0"
Jul 14 21:51:31.122145 containerd[1433]: 2025-07-14 21:51:31.117 [INFO][5623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" HandleID="k8s-pod-network.ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e" Workload="localhost-k8s-goldmane--768f4c5c69--jcm2t-eth0"
Jul 14 21:51:31.122145 containerd[1433]: 2025-07-14 21:51:31.118 [INFO][5623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:51:31.122145 containerd[1433]: 2025-07-14 21:51:31.120 [INFO][5614] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e"
Jul 14 21:51:31.123089 containerd[1433]: time="2025-07-14T21:51:31.122655520Z" level=info msg="TearDown network for sandbox \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\" successfully"
Jul 14 21:51:31.137186 containerd[1433]: time="2025-07-14T21:51:31.136964522Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 14 21:51:31.137186 containerd[1433]: time="2025-07-14T21:51:31.137050042Z" level=info msg="RemovePodSandbox \"ce239d5223a84391b36a04130544ab33c4fa26033b0a633b4edeb8aa66e41a4e\" returns successfully"
Jul 14 21:51:31.137776 containerd[1433]: time="2025-07-14T21:51:31.137746922Z" level=info msg="StopPodSandbox for \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\""
Jul 14 21:51:31.208542 containerd[1433]: 2025-07-14 21:51:31.168 [WARNING][5641] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0", GenerateName:"calico-kube-controllers-5fcd7777df-", Namespace:"calico-system", SelfLink:"", UID:"8792d4f6-4ad7-4255-a985-d0b68c2e01d9", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fcd7777df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6", Pod:"calico-kube-controllers-5fcd7777df-hlm2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif73848623a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 21:51:31.208542 containerd[1433]: 2025-07-14 21:51:31.168 [INFO][5641] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46"
Jul 14 21:51:31.208542 containerd[1433]: 2025-07-14 21:51:31.168 [INFO][5641] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" iface="eth0" netns=""
Jul 14 21:51:31.208542 containerd[1433]: 2025-07-14 21:51:31.168 [INFO][5641] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46"
Jul 14 21:51:31.208542 containerd[1433]: 2025-07-14 21:51:31.168 [INFO][5641] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46"
Jul 14 21:51:31.208542 containerd[1433]: 2025-07-14 21:51:31.190 [INFO][5649] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" HandleID="k8s-pod-network.cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Workload="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0"
Jul 14 21:51:31.208542 containerd[1433]: 2025-07-14 21:51:31.190 [INFO][5649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:51:31.208542 containerd[1433]: 2025-07-14 21:51:31.190 [INFO][5649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 21:51:31.208542 containerd[1433]: 2025-07-14 21:51:31.199 [WARNING][5649] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" HandleID="k8s-pod-network.cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Workload="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0"
Jul 14 21:51:31.208542 containerd[1433]: 2025-07-14 21:51:31.200 [INFO][5649] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" HandleID="k8s-pod-network.cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Workload="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0"
Jul 14 21:51:31.208542 containerd[1433]: 2025-07-14 21:51:31.201 [INFO][5649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:51:31.208542 containerd[1433]: 2025-07-14 21:51:31.203 [INFO][5641] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46"
Jul 14 21:51:31.208542 containerd[1433]: time="2025-07-14T21:51:31.208552571Z" level=info msg="TearDown network for sandbox \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\" successfully"
Jul 14 21:51:31.208542 containerd[1433]: time="2025-07-14T21:51:31.208611931Z" level=info msg="StopPodSandbox for \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\" returns successfully"
Jul 14 21:51:31.210004 containerd[1433]: time="2025-07-14T21:51:31.209784372Z" level=info msg="RemovePodSandbox for \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\""
Jul 14 21:51:31.210004 containerd[1433]: time="2025-07-14T21:51:31.209814012Z" level=info msg="Forcibly stopping sandbox \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\""
Jul 14 21:51:31.281617 containerd[1433]: 2025-07-14 21:51:31.248 [WARNING][5668] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0", GenerateName:"calico-kube-controllers-5fcd7777df-", Namespace:"calico-system", SelfLink:"", UID:"8792d4f6-4ad7-4255-a985-d0b68c2e01d9", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fcd7777df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"623e0fe2b5c61ae5beb2584433ba8e90ee46befbf453fe7b03d2cc18109247b6", Pod:"calico-kube-controllers-5fcd7777df-hlm2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif73848623a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 21:51:31.281617 containerd[1433]: 2025-07-14 21:51:31.248 [INFO][5668] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46"
Jul 14 21:51:31.281617 containerd[1433]: 2025-07-14 21:51:31.248 [INFO][5668] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" iface="eth0" netns=""
Jul 14 21:51:31.281617 containerd[1433]: 2025-07-14 21:51:31.248 [INFO][5668] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46"
Jul 14 21:51:31.281617 containerd[1433]: 2025-07-14 21:51:31.248 [INFO][5668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46"
Jul 14 21:51:31.281617 containerd[1433]: 2025-07-14 21:51:31.267 [INFO][5677] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" HandleID="k8s-pod-network.cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Workload="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0"
Jul 14 21:51:31.281617 containerd[1433]: 2025-07-14 21:51:31.268 [INFO][5677] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:51:31.281617 containerd[1433]: 2025-07-14 21:51:31.268 [INFO][5677] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 21:51:31.281617 containerd[1433]: 2025-07-14 21:51:31.276 [WARNING][5677] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" HandleID="k8s-pod-network.cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Workload="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0"
Jul 14 21:51:31.281617 containerd[1433]: 2025-07-14 21:51:31.276 [INFO][5677] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" HandleID="k8s-pod-network.cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46" Workload="localhost-k8s-calico--kube--controllers--5fcd7777df--hlm2l-eth0"
Jul 14 21:51:31.281617 containerd[1433]: 2025-07-14 21:51:31.278 [INFO][5677] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:51:31.281617 containerd[1433]: 2025-07-14 21:51:31.279 [INFO][5668] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46"
Jul 14 21:51:31.282107 containerd[1433]: time="2025-07-14T21:51:31.281641421Z" level=info msg="TearDown network for sandbox \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\" successfully"
Jul 14 21:51:31.287690 containerd[1433]: time="2025-07-14T21:51:31.287635982Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 14 21:51:31.287789 containerd[1433]: time="2025-07-14T21:51:31.287711742Z" level=info msg="RemovePodSandbox \"cac9aa3831c2fbd446425d725651534b31bdc3cc01e10ffcbb7f2242d0c1de46\" returns successfully"
Jul 14 21:51:31.288420 containerd[1433]: time="2025-07-14T21:51:31.288155262Z" level=info msg="StopPodSandbox for \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\""
Jul 14 21:51:31.357090 containerd[1433]: 2025-07-14 21:51:31.325 [WARNING][5696] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--82m46-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1ff05376-81e3-4cca-acdc-14244d51512a", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0", Pod:"coredns-668d6bf9bc-82m46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali082bf03dcbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 21:51:31.357090 containerd[1433]: 2025-07-14 21:51:31.325 [INFO][5696] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003"
Jul 14 21:51:31.357090 containerd[1433]: 2025-07-14 21:51:31.325 [INFO][5696] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" iface="eth0" netns=""
Jul 14 21:51:31.357090 containerd[1433]: 2025-07-14 21:51:31.325 [INFO][5696] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003"
Jul 14 21:51:31.357090 containerd[1433]: 2025-07-14 21:51:31.325 [INFO][5696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003"
Jul 14 21:51:31.357090 containerd[1433]: 2025-07-14 21:51:31.343 [INFO][5704] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" HandleID="k8s-pod-network.6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Workload="localhost-k8s-coredns--668d6bf9bc--82m46-eth0"
Jul 14 21:51:31.357090 containerd[1433]: 2025-07-14 21:51:31.344 [INFO][5704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:51:31.357090 containerd[1433]: 2025-07-14 21:51:31.344 [INFO][5704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 21:51:31.357090 containerd[1433]: 2025-07-14 21:51:31.352 [WARNING][5704] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" HandleID="k8s-pod-network.6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Workload="localhost-k8s-coredns--668d6bf9bc--82m46-eth0"
Jul 14 21:51:31.357090 containerd[1433]: 2025-07-14 21:51:31.352 [INFO][5704] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" HandleID="k8s-pod-network.6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Workload="localhost-k8s-coredns--668d6bf9bc--82m46-eth0"
Jul 14 21:51:31.357090 containerd[1433]: 2025-07-14 21:51:31.353 [INFO][5704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:51:31.357090 containerd[1433]: 2025-07-14 21:51:31.355 [INFO][5696] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003"
Jul 14 21:51:31.357834 containerd[1433]: time="2025-07-14T21:51:31.357137070Z" level=info msg="TearDown network for sandbox \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\" successfully"
Jul 14 21:51:31.357834 containerd[1433]: time="2025-07-14T21:51:31.357166190Z" level=info msg="StopPodSandbox for \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\" returns successfully"
Jul 14 21:51:31.358134 containerd[1433]: time="2025-07-14T21:51:31.357921871Z" level=info msg="RemovePodSandbox for \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\""
Jul 14 21:51:31.358134 containerd[1433]: time="2025-07-14T21:51:31.357957871Z" level=info msg="Forcibly stopping sandbox \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\""
Jul 14 21:51:31.423279 containerd[1433]: 2025-07-14 21:51:31.390 [WARNING][5722] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--82m46-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1ff05376-81e3-4cca-acdc-14244d51512a", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"113a90bdfde97301915dff99acb2335a0aad368fa3c8cbdd96436339f8bfe1d0", Pod:"coredns-668d6bf9bc-82m46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali082bf03dcbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 21:51:31.423279 containerd[1433]: 2025-07-14 21:51:31.390 [INFO][5722] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003"
Jul 14 21:51:31.423279 containerd[1433]: 2025-07-14 21:51:31.390 [INFO][5722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" iface="eth0" netns=""
Jul 14 21:51:31.423279 containerd[1433]: 2025-07-14 21:51:31.390 [INFO][5722] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003"
Jul 14 21:51:31.423279 containerd[1433]: 2025-07-14 21:51:31.390 [INFO][5722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003"
Jul 14 21:51:31.423279 containerd[1433]: 2025-07-14 21:51:31.408 [INFO][5730] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" HandleID="k8s-pod-network.6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Workload="localhost-k8s-coredns--668d6bf9bc--82m46-eth0"
Jul 14 21:51:31.423279 containerd[1433]: 2025-07-14 21:51:31.408 [INFO][5730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:51:31.423279 containerd[1433]: 2025-07-14 21:51:31.408 [INFO][5730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 21:51:31.423279 containerd[1433]: 2025-07-14 21:51:31.416 [WARNING][5730] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" HandleID="k8s-pod-network.6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Workload="localhost-k8s-coredns--668d6bf9bc--82m46-eth0"
Jul 14 21:51:31.423279 containerd[1433]: 2025-07-14 21:51:31.417 [INFO][5730] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" HandleID="k8s-pod-network.6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003" Workload="localhost-k8s-coredns--668d6bf9bc--82m46-eth0"
Jul 14 21:51:31.423279 containerd[1433]: 2025-07-14 21:51:31.418 [INFO][5730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:51:31.423279 containerd[1433]: 2025-07-14 21:51:31.420 [INFO][5722] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003"
Jul 14 21:51:31.423279 containerd[1433]: time="2025-07-14T21:51:31.422063439Z" level=info msg="TearDown network for sandbox \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\" successfully"
Jul 14 21:51:31.425650 containerd[1433]: time="2025-07-14T21:51:31.425615479Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 14 21:51:31.425804 containerd[1433]: time="2025-07-14T21:51:31.425786359Z" level=info msg="RemovePodSandbox \"6febb3c4c3c79a393edf2f9bedfcbb4b7e34ba9e8a1ce636009003671a9e0003\" returns successfully"
Jul 14 21:51:31.426342 containerd[1433]: time="2025-07-14T21:51:31.426302239Z" level=info msg="StopPodSandbox for \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\""
Jul 14 21:51:31.490191 containerd[1433]: 2025-07-14 21:51:31.459 [WARNING][5748] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" WorkloadEndpoint="localhost-k8s-whisker--7774d4d645--rqsfv-eth0"
Jul 14 21:51:31.490191 containerd[1433]: 2025-07-14 21:51:31.459 [INFO][5748] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd"
Jul 14 21:51:31.490191 containerd[1433]: 2025-07-14 21:51:31.459 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" iface="eth0" netns=""
Jul 14 21:51:31.490191 containerd[1433]: 2025-07-14 21:51:31.459 [INFO][5748] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd"
Jul 14 21:51:31.490191 containerd[1433]: 2025-07-14 21:51:31.459 [INFO][5748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd"
Jul 14 21:51:31.490191 containerd[1433]: 2025-07-14 21:51:31.476 [INFO][5757] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" HandleID="k8s-pod-network.adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Workload="localhost-k8s-whisker--7774d4d645--rqsfv-eth0"
Jul 14 21:51:31.490191 containerd[1433]: 2025-07-14 21:51:31.476 [INFO][5757] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:51:31.490191 containerd[1433]: 2025-07-14 21:51:31.476 [INFO][5757] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 21:51:31.490191 containerd[1433]: 2025-07-14 21:51:31.485 [WARNING][5757] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" HandleID="k8s-pod-network.adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Workload="localhost-k8s-whisker--7774d4d645--rqsfv-eth0"
Jul 14 21:51:31.490191 containerd[1433]: 2025-07-14 21:51:31.485 [INFO][5757] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" HandleID="k8s-pod-network.adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Workload="localhost-k8s-whisker--7774d4d645--rqsfv-eth0"
Jul 14 21:51:31.490191 containerd[1433]: 2025-07-14 21:51:31.487 [INFO][5757] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:51:31.490191 containerd[1433]: 2025-07-14 21:51:31.488 [INFO][5748] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd"
Jul 14 21:51:31.490843 containerd[1433]: time="2025-07-14T21:51:31.490227168Z" level=info msg="TearDown network for sandbox \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\" successfully"
Jul 14 21:51:31.490843 containerd[1433]: time="2025-07-14T21:51:31.490252768Z" level=info msg="StopPodSandbox for \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\" returns successfully"
Jul 14 21:51:31.490843 containerd[1433]: time="2025-07-14T21:51:31.490808968Z" level=info msg="RemovePodSandbox for \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\""
Jul 14 21:51:31.490912 containerd[1433]: time="2025-07-14T21:51:31.490845608Z" level=info msg="Forcibly stopping sandbox \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\""
Jul 14 21:51:31.558692 containerd[1433]: 2025-07-14 21:51:31.521 [WARNING][5774] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" WorkloadEndpoint="localhost-k8s-whisker--7774d4d645--rqsfv-eth0"
Jul 14 21:51:31.558692 containerd[1433]: 2025-07-14 21:51:31.521 [INFO][5774] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd"
Jul 14 21:51:31.558692 containerd[1433]: 2025-07-14 21:51:31.521 [INFO][5774] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" iface="eth0" netns=""
Jul 14 21:51:31.558692 containerd[1433]: 2025-07-14 21:51:31.521 [INFO][5774] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd"
Jul 14 21:51:31.558692 containerd[1433]: 2025-07-14 21:51:31.521 [INFO][5774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd"
Jul 14 21:51:31.558692 containerd[1433]: 2025-07-14 21:51:31.539 [INFO][5784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" HandleID="k8s-pod-network.adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Workload="localhost-k8s-whisker--7774d4d645--rqsfv-eth0"
Jul 14 21:51:31.558692 containerd[1433]: 2025-07-14 21:51:31.542 [INFO][5784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:51:31.558692 containerd[1433]: 2025-07-14 21:51:31.542 [INFO][5784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 21:51:31.558692 containerd[1433]: 2025-07-14 21:51:31.551 [WARNING][5784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" HandleID="k8s-pod-network.adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Workload="localhost-k8s-whisker--7774d4d645--rqsfv-eth0"
Jul 14 21:51:31.558692 containerd[1433]: 2025-07-14 21:51:31.551 [INFO][5784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" HandleID="k8s-pod-network.adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd" Workload="localhost-k8s-whisker--7774d4d645--rqsfv-eth0"
Jul 14 21:51:31.558692 containerd[1433]: 2025-07-14 21:51:31.552 [INFO][5784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:51:31.558692 containerd[1433]: 2025-07-14 21:51:31.555 [INFO][5774] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd"
Jul 14 21:51:31.558692 containerd[1433]: time="2025-07-14T21:51:31.557323656Z" level=info msg="TearDown network for sandbox \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\" successfully"
Jul 14 21:51:31.561757 containerd[1433]: time="2025-07-14T21:51:31.561703937Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 14 21:51:31.561967 containerd[1433]: time="2025-07-14T21:51:31.561779257Z" level=info msg="RemovePodSandbox \"adc19adaf78a23666ff46e8ef6a469af458255dc82b3f1da002c83f06695a8cd\" returns successfully"
Jul 14 21:51:31.562235 containerd[1433]: time="2025-07-14T21:51:31.562202457Z" level=info msg="StopPodSandbox for \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\""
Jul 14 21:51:31.628866 containerd[1433]: 2025-07-14 21:51:31.594 [WARNING][5801] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21056f12-8439-463a-a28c-d3964e6d90bc", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce", Pod:"coredns-668d6bf9bc-r2zgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbab93df282", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 21:51:31.628866 containerd[1433]: 2025-07-14 21:51:31.594 [INFO][5801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e"
Jul 14 21:51:31.628866 containerd[1433]: 2025-07-14 21:51:31.595 [INFO][5801] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" iface="eth0" netns=""
Jul 14 21:51:31.628866 containerd[1433]: 2025-07-14 21:51:31.595 [INFO][5801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e"
Jul 14 21:51:31.628866 containerd[1433]: 2025-07-14 21:51:31.595 [INFO][5801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e"
Jul 14 21:51:31.628866 containerd[1433]: 2025-07-14 21:51:31.615 [INFO][5810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" HandleID="k8s-pod-network.20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Workload="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0"
Jul 14 21:51:31.628866 containerd[1433]: 2025-07-14 21:51:31.615 [INFO][5810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:51:31.628866 containerd[1433]: 2025-07-14 21:51:31.615 [INFO][5810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 21:51:31.628866 containerd[1433]: 2025-07-14 21:51:31.624 [WARNING][5810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" HandleID="k8s-pod-network.20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Workload="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0"
Jul 14 21:51:31.628866 containerd[1433]: 2025-07-14 21:51:31.624 [INFO][5810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" HandleID="k8s-pod-network.20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Workload="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0"
Jul 14 21:51:31.628866 containerd[1433]: 2025-07-14 21:51:31.625 [INFO][5810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 21:51:31.628866 containerd[1433]: 2025-07-14 21:51:31.627 [INFO][5801] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e"
Jul 14 21:51:31.629672 containerd[1433]: time="2025-07-14T21:51:31.628911825Z" level=info msg="TearDown network for sandbox \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\" successfully"
Jul 14 21:51:31.629672 containerd[1433]: time="2025-07-14T21:51:31.628935825Z" level=info msg="StopPodSandbox for \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\" returns successfully"
Jul 14 21:51:31.630156 containerd[1433]: time="2025-07-14T21:51:31.629846225Z" level=info msg="RemovePodSandbox for \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\""
Jul 14 21:51:31.630156 containerd[1433]: time="2025-07-14T21:51:31.629878825Z" level=info msg="Forcibly stopping sandbox \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\""
Jul 14 21:51:31.697064 containerd[1433]: 2025-07-14 21:51:31.663 [WARNING][5827] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21056f12-8439-463a-a28c-d3964e6d90bc", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e50b49fb51f77c3ef4301022edece777960087becb016f880cb491887e4fecce", Pod:"coredns-668d6bf9bc-r2zgm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidbab93df282", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 21:51:31.697064 containerd[1433]: 2025-07-14 21:51:31.664 [INFO][5827] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e"
Jul 14 21:51:31.697064 containerd[1433]: 2025-07-14 21:51:31.664 [INFO][5827] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" iface="eth0" netns=""
Jul 14 21:51:31.697064 containerd[1433]: 2025-07-14 21:51:31.664 [INFO][5827] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e"
Jul 14 21:51:31.697064 containerd[1433]: 2025-07-14 21:51:31.664 [INFO][5827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e"
Jul 14 21:51:31.697064 containerd[1433]: 2025-07-14 21:51:31.684 [INFO][5837] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" HandleID="k8s-pod-network.20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Workload="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0"
Jul 14 21:51:31.697064 containerd[1433]: 2025-07-14 21:51:31.684 [INFO][5837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 21:51:31.697064 containerd[1433]: 2025-07-14 21:51:31.684 [INFO][5837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 21:51:31.697064 containerd[1433]: 2025-07-14 21:51:31.692 [WARNING][5837] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" HandleID="k8s-pod-network.20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Workload="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" Jul 14 21:51:31.697064 containerd[1433]: 2025-07-14 21:51:31.692 [INFO][5837] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" HandleID="k8s-pod-network.20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Workload="localhost-k8s-coredns--668d6bf9bc--r2zgm-eth0" Jul 14 21:51:31.697064 containerd[1433]: 2025-07-14 21:51:31.693 [INFO][5837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:31.697064 containerd[1433]: 2025-07-14 21:51:31.695 [INFO][5827] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e" Jul 14 21:51:31.697064 containerd[1433]: time="2025-07-14T21:51:31.697138394Z" level=info msg="TearDown network for sandbox \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\" successfully" Jul 14 21:51:31.700510 containerd[1433]: time="2025-07-14T21:51:31.700472995Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:51:31.700676 containerd[1433]: time="2025-07-14T21:51:31.700656915Z" level=info msg="RemovePodSandbox \"20e64029a87663951c9e358e1515e1c43c4043117a425ae5f1406613d8451c7e\" returns successfully" Jul 14 21:51:31.701283 containerd[1433]: time="2025-07-14T21:51:31.701257395Z" level=info msg="StopPodSandbox for \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\"" Jul 14 21:51:31.764422 containerd[1433]: 2025-07-14 21:51:31.732 [WARNING][5854] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9n94f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6dffae0-199e-4860-b66d-240601db16b1", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d", Pod:"csi-node-driver-9n94f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f28f49a92f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:31.764422 containerd[1433]: 2025-07-14 21:51:31.732 [INFO][5854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:51:31.764422 containerd[1433]: 2025-07-14 21:51:31.732 [INFO][5854] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" iface="eth0" netns="" Jul 14 21:51:31.764422 containerd[1433]: 2025-07-14 21:51:31.732 [INFO][5854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:51:31.764422 containerd[1433]: 2025-07-14 21:51:31.732 [INFO][5854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:51:31.764422 containerd[1433]: 2025-07-14 21:51:31.750 [INFO][5863] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" HandleID="k8s-pod-network.68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Workload="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:31.764422 containerd[1433]: 2025-07-14 21:51:31.750 [INFO][5863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:31.764422 containerd[1433]: 2025-07-14 21:51:31.750 [INFO][5863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:31.764422 containerd[1433]: 2025-07-14 21:51:31.759 [WARNING][5863] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" HandleID="k8s-pod-network.68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Workload="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:31.764422 containerd[1433]: 2025-07-14 21:51:31.759 [INFO][5863] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" HandleID="k8s-pod-network.68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Workload="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:31.764422 containerd[1433]: 2025-07-14 21:51:31.761 [INFO][5863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:31.764422 containerd[1433]: 2025-07-14 21:51:31.762 [INFO][5854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:51:31.764998 containerd[1433]: time="2025-07-14T21:51:31.764439803Z" level=info msg="TearDown network for sandbox \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\" successfully" Jul 14 21:51:31.764998 containerd[1433]: time="2025-07-14T21:51:31.764466843Z" level=info msg="StopPodSandbox for \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\" returns successfully" Jul 14 21:51:31.764998 containerd[1433]: time="2025-07-14T21:51:31.764956003Z" level=info msg="RemovePodSandbox for \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\"" Jul 14 21:51:31.764998 containerd[1433]: time="2025-07-14T21:51:31.764986763Z" level=info msg="Forcibly stopping sandbox \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\"" Jul 14 21:51:31.831982 containerd[1433]: 2025-07-14 21:51:31.798 [WARNING][5882] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9n94f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c6dffae0-199e-4860-b66d-240601db16b1", ResourceVersion:"1102", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be5a94f3b42443042eca553db338455f9cfa0494f1ebfa338dde886df9c34c3d", Pod:"csi-node-driver-9n94f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4f28f49a92f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:31.831982 containerd[1433]: 2025-07-14 21:51:31.798 [INFO][5882] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:51:31.831982 containerd[1433]: 2025-07-14 21:51:31.798 [INFO][5882] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" iface="eth0" netns="" Jul 14 21:51:31.831982 containerd[1433]: 2025-07-14 21:51:31.798 [INFO][5882] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:51:31.831982 containerd[1433]: 2025-07-14 21:51:31.798 [INFO][5882] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:51:31.831982 containerd[1433]: 2025-07-14 21:51:31.816 [INFO][5890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" HandleID="k8s-pod-network.68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Workload="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:31.831982 containerd[1433]: 2025-07-14 21:51:31.816 [INFO][5890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:31.831982 containerd[1433]: 2025-07-14 21:51:31.816 [INFO][5890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:31.831982 containerd[1433]: 2025-07-14 21:51:31.825 [WARNING][5890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" HandleID="k8s-pod-network.68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Workload="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:31.831982 containerd[1433]: 2025-07-14 21:51:31.826 [INFO][5890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" HandleID="k8s-pod-network.68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Workload="localhost-k8s-csi--node--driver--9n94f-eth0" Jul 14 21:51:31.831982 containerd[1433]: 2025-07-14 21:51:31.828 [INFO][5890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:31.831982 containerd[1433]: 2025-07-14 21:51:31.830 [INFO][5882] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681" Jul 14 21:51:31.831982 containerd[1433]: time="2025-07-14T21:51:31.831957451Z" level=info msg="TearDown network for sandbox \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\" successfully" Jul 14 21:51:31.835167 containerd[1433]: time="2025-07-14T21:51:31.835122012Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:51:31.835230 containerd[1433]: time="2025-07-14T21:51:31.835190492Z" level=info msg="RemovePodSandbox \"68e358038354ed74296fd3892aaa92f0ef64d1757d88ef42f949cba9be72e681\" returns successfully" Jul 14 21:51:31.836050 containerd[1433]: time="2025-07-14T21:51:31.835765732Z" level=info msg="StopPodSandbox for \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\"" Jul 14 21:51:31.903487 containerd[1433]: 2025-07-14 21:51:31.868 [WARNING][5910] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0", GenerateName:"calico-apiserver-5ddcff449d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddcff449d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9", Pod:"calico-apiserver-5ddcff449d-zw4fd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali819a31c2137", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:31.903487 containerd[1433]: 2025-07-14 21:51:31.869 [INFO][5910] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:51:31.903487 containerd[1433]: 2025-07-14 21:51:31.869 [INFO][5910] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" iface="eth0" netns="" Jul 14 21:51:31.903487 containerd[1433]: 2025-07-14 21:51:31.869 [INFO][5910] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:51:31.903487 containerd[1433]: 2025-07-14 21:51:31.869 [INFO][5910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:51:31.903487 containerd[1433]: 2025-07-14 21:51:31.890 [INFO][5919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" HandleID="k8s-pod-network.b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Workload="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:31.903487 containerd[1433]: 2025-07-14 21:51:31.890 [INFO][5919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:31.903487 containerd[1433]: 2025-07-14 21:51:31.890 [INFO][5919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:31.903487 containerd[1433]: 2025-07-14 21:51:31.898 [WARNING][5919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" HandleID="k8s-pod-network.b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Workload="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:31.903487 containerd[1433]: 2025-07-14 21:51:31.898 [INFO][5919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" HandleID="k8s-pod-network.b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Workload="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:31.903487 containerd[1433]: 2025-07-14 21:51:31.899 [INFO][5919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:31.903487 containerd[1433]: 2025-07-14 21:51:31.901 [INFO][5910] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:51:31.903920 containerd[1433]: time="2025-07-14T21:51:31.903525821Z" level=info msg="TearDown network for sandbox \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\" successfully" Jul 14 21:51:31.903920 containerd[1433]: time="2025-07-14T21:51:31.903551341Z" level=info msg="StopPodSandbox for \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\" returns successfully" Jul 14 21:51:31.904067 containerd[1433]: time="2025-07-14T21:51:31.904040501Z" level=info msg="RemovePodSandbox for \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\"" Jul 14 21:51:31.904103 containerd[1433]: time="2025-07-14T21:51:31.904075741Z" level=info msg="Forcibly stopping sandbox \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\"" Jul 14 21:51:31.970623 containerd[1433]: 2025-07-14 21:51:31.938 [WARNING][5937] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0", GenerateName:"calico-apiserver-5ddcff449d-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8d7425c-d46f-4d09-ac17-c7d1cb41b4a6", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddcff449d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d31989b44513aac391d71534f77a5d9d203718cb157f36fe11bad56eb7e0fe9", Pod:"calico-apiserver-5ddcff449d-zw4fd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali819a31c2137", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:31.970623 containerd[1433]: 2025-07-14 21:51:31.938 [INFO][5937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:51:31.970623 containerd[1433]: 2025-07-14 21:51:31.938 [INFO][5937] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" iface="eth0" netns="" Jul 14 21:51:31.970623 containerd[1433]: 2025-07-14 21:51:31.938 [INFO][5937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:51:31.970623 containerd[1433]: 2025-07-14 21:51:31.938 [INFO][5937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:51:31.970623 containerd[1433]: 2025-07-14 21:51:31.957 [INFO][5946] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" HandleID="k8s-pod-network.b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Workload="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:31.970623 containerd[1433]: 2025-07-14 21:51:31.957 [INFO][5946] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:31.970623 containerd[1433]: 2025-07-14 21:51:31.957 [INFO][5946] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:31.970623 containerd[1433]: 2025-07-14 21:51:31.965 [WARNING][5946] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" HandleID="k8s-pod-network.b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Workload="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:31.970623 containerd[1433]: 2025-07-14 21:51:31.965 [INFO][5946] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" HandleID="k8s-pod-network.b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Workload="localhost-k8s-calico--apiserver--5ddcff449d--zw4fd-eth0" Jul 14 21:51:31.970623 containerd[1433]: 2025-07-14 21:51:31.967 [INFO][5946] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:31.970623 containerd[1433]: 2025-07-14 21:51:31.968 [INFO][5937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c" Jul 14 21:51:31.971231 containerd[1433]: time="2025-07-14T21:51:31.970667429Z" level=info msg="TearDown network for sandbox \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\" successfully" Jul 14 21:51:31.973492 containerd[1433]: time="2025-07-14T21:51:31.973460070Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:51:31.973581 containerd[1433]: time="2025-07-14T21:51:31.973520910Z" level=info msg="RemovePodSandbox \"b8a717c81b7ac25aa0d1f6d31b462b7f63a95d157b27e349d9272ff722cd712c\" returns successfully" Jul 14 21:51:31.974042 containerd[1433]: time="2025-07-14T21:51:31.974022150Z" level=info msg="StopPodSandbox for \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\"" Jul 14 21:51:32.040363 containerd[1433]: 2025-07-14 21:51:32.007 [WARNING][5964] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0", GenerateName:"calico-apiserver-5ddcff449d-", Namespace:"calico-apiserver", SelfLink:"", UID:"93233d1f-a8c9-4f31-8a33-73445cc9215c", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddcff449d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35", Pod:"calico-apiserver-5ddcff449d-6w6dr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf16b91e5e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:32.040363 containerd[1433]: 2025-07-14 21:51:32.007 [INFO][5964] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:51:32.040363 containerd[1433]: 2025-07-14 21:51:32.007 [INFO][5964] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" iface="eth0" netns="" Jul 14 21:51:32.040363 containerd[1433]: 2025-07-14 21:51:32.007 [INFO][5964] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:51:32.040363 containerd[1433]: 2025-07-14 21:51:32.007 [INFO][5964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:51:32.040363 containerd[1433]: 2025-07-14 21:51:32.026 [INFO][5973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" HandleID="k8s-pod-network.4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Workload="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:32.040363 containerd[1433]: 2025-07-14 21:51:32.026 [INFO][5973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:32.040363 containerd[1433]: 2025-07-14 21:51:32.026 [INFO][5973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:32.040363 containerd[1433]: 2025-07-14 21:51:32.035 [WARNING][5973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" HandleID="k8s-pod-network.4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Workload="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:32.040363 containerd[1433]: 2025-07-14 21:51:32.035 [INFO][5973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" HandleID="k8s-pod-network.4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Workload="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:32.040363 containerd[1433]: 2025-07-14 21:51:32.036 [INFO][5973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:32.040363 containerd[1433]: 2025-07-14 21:51:32.038 [INFO][5964] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:51:32.040363 containerd[1433]: time="2025-07-14T21:51:32.040343598Z" level=info msg="TearDown network for sandbox \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\" successfully" Jul 14 21:51:32.040363 containerd[1433]: time="2025-07-14T21:51:32.040368478Z" level=info msg="StopPodSandbox for \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\" returns successfully" Jul 14 21:51:32.041041 containerd[1433]: time="2025-07-14T21:51:32.040907398Z" level=info msg="RemovePodSandbox for \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\"" Jul 14 21:51:32.041041 containerd[1433]: time="2025-07-14T21:51:32.040937918Z" level=info msg="Forcibly stopping sandbox \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\"" Jul 14 21:51:32.105207 containerd[1433]: 2025-07-14 21:51:32.072 [WARNING][5990] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0", GenerateName:"calico-apiserver-5ddcff449d-", Namespace:"calico-apiserver", SelfLink:"", UID:"93233d1f-a8c9-4f31-8a33-73445cc9215c", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 21, 50, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddcff449d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3219e02325be069209c9894ed31a51c6f5354c63c0022ff4a5444307c5271d35", Pod:"calico-apiserver-5ddcff449d-6w6dr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf16b91e5e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 21:51:32.105207 containerd[1433]: 2025-07-14 21:51:32.072 [INFO][5990] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:51:32.105207 containerd[1433]: 2025-07-14 21:51:32.072 [INFO][5990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" iface="eth0" netns="" Jul 14 21:51:32.105207 containerd[1433]: 2025-07-14 21:51:32.072 [INFO][5990] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:51:32.105207 containerd[1433]: 2025-07-14 21:51:32.072 [INFO][5990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:51:32.105207 containerd[1433]: 2025-07-14 21:51:32.090 [INFO][5999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" HandleID="k8s-pod-network.4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Workload="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:32.105207 containerd[1433]: 2025-07-14 21:51:32.090 [INFO][5999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 21:51:32.105207 containerd[1433]: 2025-07-14 21:51:32.090 [INFO][5999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 21:51:32.105207 containerd[1433]: 2025-07-14 21:51:32.098 [WARNING][5999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" HandleID="k8s-pod-network.4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Workload="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:32.105207 containerd[1433]: 2025-07-14 21:51:32.098 [INFO][5999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" HandleID="k8s-pod-network.4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Workload="localhost-k8s-calico--apiserver--5ddcff449d--6w6dr-eth0" Jul 14 21:51:32.105207 containerd[1433]: 2025-07-14 21:51:32.100 [INFO][5999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 21:51:32.105207 containerd[1433]: 2025-07-14 21:51:32.102 [INFO][5990] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a" Jul 14 21:51:32.105207 containerd[1433]: time="2025-07-14T21:51:32.104407686Z" level=info msg="TearDown network for sandbox \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\" successfully" Jul 14 21:51:32.117085 containerd[1433]: time="2025-07-14T21:51:32.117040487Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 21:51:32.117252 containerd[1433]: time="2025-07-14T21:51:32.117232927Z" level=info msg="RemovePodSandbox \"4dae70ecaad5bc48d741d24d754c0ce491028be9e05ef22bf61e4dd01445132a\" returns successfully" Jul 14 21:51:35.326469 systemd[1]: Started sshd@17-10.0.0.52:22-10.0.0.1:57090.service - OpenSSH per-connection server daemon (10.0.0.1:57090). Jul 14 21:51:35.382425 sshd[6035]: Accepted publickey for core from 10.0.0.1 port 57090 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:51:35.383952 sshd[6035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:51:35.392233 systemd-logind[1412]: New session 18 of user core. Jul 14 21:51:35.404739 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 14 21:51:35.609677 sshd[6035]: pam_unix(sshd:session): session closed for user core Jul 14 21:51:35.615903 systemd[1]: sshd@17-10.0.0.52:22-10.0.0.1:57090.service: Deactivated successfully. Jul 14 21:51:35.618597 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 21:51:35.620808 systemd-logind[1412]: Session 18 logged out. Waiting for processes to exit. Jul 14 21:51:35.622837 systemd-logind[1412]: Removed session 18. Jul 14 21:51:39.083844 kubelet[2442]: I0714 21:51:39.083382 2442 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 21:51:40.623294 systemd[1]: Started sshd@18-10.0.0.52:22-10.0.0.1:57100.service - OpenSSH per-connection server daemon (10.0.0.1:57100). Jul 14 21:51:40.689774 sshd[6077]: Accepted publickey for core from 10.0.0.1 port 57100 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:51:40.691286 sshd[6077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:51:40.705630 systemd-logind[1412]: New session 19 of user core. Jul 14 21:51:40.719761 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jul 14 21:51:41.099702 sshd[6077]: pam_unix(sshd:session): session closed for user core Jul 14 21:51:41.104847 systemd-logind[1412]: Session 19 logged out. Waiting for processes to exit. Jul 14 21:51:41.105680 systemd[1]: sshd@18-10.0.0.52:22-10.0.0.1:57100.service: Deactivated successfully. Jul 14 21:51:41.110228 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 21:51:41.112228 systemd-logind[1412]: Removed session 19. Jul 14 21:51:46.113341 systemd[1]: Started sshd@19-10.0.0.52:22-10.0.0.1:51338.service - OpenSSH per-connection server daemon (10.0.0.1:51338). Jul 14 21:51:46.161605 sshd[6123]: Accepted publickey for core from 10.0.0.1 port 51338 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:51:46.163814 sshd[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:51:46.167840 systemd-logind[1412]: New session 20 of user core. Jul 14 21:51:46.174735 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 14 21:51:46.402348 sshd[6123]: pam_unix(sshd:session): session closed for user core Jul 14 21:51:46.407996 systemd[1]: sshd@19-10.0.0.52:22-10.0.0.1:51338.service: Deactivated successfully. Jul 14 21:51:46.410533 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 21:51:46.412242 systemd-logind[1412]: Session 20 logged out. Waiting for processes to exit. Jul 14 21:51:46.413501 systemd-logind[1412]: Removed session 20. Jul 14 21:51:51.416406 systemd[1]: Started sshd@20-10.0.0.52:22-10.0.0.1:51344.service - OpenSSH per-connection server daemon (10.0.0.1:51344). Jul 14 21:51:51.455290 sshd[6158]: Accepted publickey for core from 10.0.0.1 port 51344 ssh2: RSA SHA256:M1w9XMnl/I4XlZYWJshBUfaekchzCKWegQKD2Nlty/U Jul 14 21:51:51.457182 sshd[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 21:51:51.461145 systemd-logind[1412]: New session 21 of user core. Jul 14 21:51:51.472771 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 14 21:51:51.660543 sshd[6158]: pam_unix(sshd:session): session closed for user core Jul 14 21:51:51.666184 systemd[1]: sshd@20-10.0.0.52:22-10.0.0.1:51344.service: Deactivated successfully. Jul 14 21:51:51.669462 systemd[1]: session-21.scope: Deactivated successfully. Jul 14 21:51:51.670313 systemd-logind[1412]: Session 21 logged out. Waiting for processes to exit. Jul 14 21:51:51.671197 systemd-logind[1412]: Removed session 21. Jul 14 21:51:51.987622 kubelet[2442]: E0714 21:51:51.987593 2442 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"