Jul 10 00:28:41.894890 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 10 00:28:41.894911 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jul 9 22:54:34 -00 2025
Jul 10 00:28:41.894921 kernel: KASLR enabled
Jul 10 00:28:41.894926 kernel: efi: EFI v2.7 by EDK II
Jul 10 00:28:41.894932 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 10 00:28:41.894938 kernel: random: crng init done
Jul 10 00:28:41.894945 kernel: ACPI: Early table checksum verification disabled
Jul 10 00:28:41.894951 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 10 00:28:41.894957 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 10 00:28:41.894965 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:41.894971 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:41.894977 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:41.894983 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:41.894989 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:41.894996 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:41.895004 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:41.895010 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:41.895017 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 10 00:28:41.895023 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 10 00:28:41.895029 kernel: NUMA: Failed to initialise from firmware
Jul 10 00:28:41.895036 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:28:41.895042 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 10 00:28:41.895049 kernel: Zone ranges:
Jul 10 00:28:41.895055 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:28:41.895061 kernel: DMA32 empty
Jul 10 00:28:41.895069 kernel: Normal empty
Jul 10 00:28:41.895075 kernel: Movable zone start for each node
Jul 10 00:28:41.895081 kernel: Early memory node ranges
Jul 10 00:28:41.895088 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 10 00:28:41.895094 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 10 00:28:41.895100 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 10 00:28:41.895107 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 10 00:28:41.895113 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 10 00:28:41.895119 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 10 00:28:41.895126 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 10 00:28:41.895132 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 10 00:28:41.895138 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 10 00:28:41.895146 kernel: psci: probing for conduit method from ACPI.
Jul 10 00:28:41.895153 kernel: psci: PSCIv1.1 detected in firmware.
Jul 10 00:28:41.895159 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 10 00:28:41.895168 kernel: psci: Trusted OS migration not required
Jul 10 00:28:41.895175 kernel: psci: SMC Calling Convention v1.1
Jul 10 00:28:41.895182 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 10 00:28:41.895190 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 10 00:28:41.895197 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 10 00:28:41.895204 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 10 00:28:41.895210 kernel: Detected PIPT I-cache on CPU0
Jul 10 00:28:41.895217 kernel: CPU features: detected: GIC system register CPU interface
Jul 10 00:28:41.895224 kernel: CPU features: detected: Hardware dirty bit management
Jul 10 00:28:41.895231 kernel: CPU features: detected: Spectre-v4
Jul 10 00:28:41.895237 kernel: CPU features: detected: Spectre-BHB
Jul 10 00:28:41.895244 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 10 00:28:41.895251 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 10 00:28:41.895258 kernel: CPU features: detected: ARM erratum 1418040
Jul 10 00:28:41.895265 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 10 00:28:41.895272 kernel: alternatives: applying boot alternatives
Jul 10 00:28:41.895280 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2fac453dfc912247542078b0007b053dde4e47cc6ef808508492c36b6016a78f
Jul 10 00:28:41.895306 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 10 00:28:41.895314 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 10 00:28:41.895320 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 10 00:28:41.895327 kernel: Fallback order for Node 0: 0
Jul 10 00:28:41.895334 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 10 00:28:41.895341 kernel: Policy zone: DMA
Jul 10 00:28:41.895347 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 10 00:28:41.895356 kernel: software IO TLB: area num 4.
Jul 10 00:28:41.895363 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 10 00:28:41.895371 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Jul 10 00:28:41.895378 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 10 00:28:41.895384 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 10 00:28:41.895391 kernel: rcu: RCU event tracing is enabled.
Jul 10 00:28:41.895398 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 10 00:28:41.895405 kernel: Trampoline variant of Tasks RCU enabled.
Jul 10 00:28:41.895412 kernel: Tracing variant of Tasks RCU enabled.
Jul 10 00:28:41.895419 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 10 00:28:41.895425 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 10 00:28:41.895432 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 10 00:28:41.895440 kernel: GICv3: 256 SPIs implemented
Jul 10 00:28:41.895447 kernel: GICv3: 0 Extended SPIs implemented
Jul 10 00:28:41.895454 kernel: Root IRQ handler: gic_handle_irq
Jul 10 00:28:41.895460 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 10 00:28:41.895467 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 10 00:28:41.895474 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 10 00:28:41.895481 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 10 00:28:41.895488 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 10 00:28:41.895495 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 10 00:28:41.895506 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 10 00:28:41.895513 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 10 00:28:41.895521 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:28:41.895528 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 10 00:28:41.895535 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 10 00:28:41.895542 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 10 00:28:41.895548 kernel: arm-pv: using stolen time PV
Jul 10 00:28:41.895555 kernel: Console: colour dummy device 80x25
Jul 10 00:28:41.895562 kernel: ACPI: Core revision 20230628
Jul 10 00:28:41.895570 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 10 00:28:41.895577 kernel: pid_max: default: 32768 minimum: 301
Jul 10 00:28:41.895583 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 10 00:28:41.895592 kernel: landlock: Up and running.
Jul 10 00:28:41.895598 kernel: SELinux: Initializing.
Jul 10 00:28:41.895605 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:28:41.895613 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 10 00:28:41.895620 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:28:41.895627 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 10 00:28:41.895634 kernel: rcu: Hierarchical SRCU implementation.
Jul 10 00:28:41.895641 kernel: rcu: Max phase no-delay instances is 400.
Jul 10 00:28:41.895648 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 10 00:28:41.895657 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 10 00:28:41.895663 kernel: Remapping and enabling EFI services.
Jul 10 00:28:41.895670 kernel: smp: Bringing up secondary CPUs ...
Jul 10 00:28:41.895677 kernel: Detected PIPT I-cache on CPU1
Jul 10 00:28:41.895684 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 10 00:28:41.895691 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 10 00:28:41.895698 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:28:41.895705 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 10 00:28:41.895712 kernel: Detected PIPT I-cache on CPU2
Jul 10 00:28:41.895719 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 10 00:28:41.895728 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 10 00:28:41.895735 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:28:41.895747 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 10 00:28:41.895782 kernel: Detected PIPT I-cache on CPU3
Jul 10 00:28:41.895790 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 10 00:28:41.895797 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 10 00:28:41.895804 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 10 00:28:41.895811 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 10 00:28:41.895819 kernel: smp: Brought up 1 node, 4 CPUs
Jul 10 00:28:41.895828 kernel: SMP: Total of 4 processors activated.
Jul 10 00:28:41.895836 kernel: CPU features: detected: 32-bit EL0 Support
Jul 10 00:28:41.895843 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 10 00:28:41.895850 kernel: CPU features: detected: Common not Private translations
Jul 10 00:28:41.895858 kernel: CPU features: detected: CRC32 instructions
Jul 10 00:28:41.895865 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 10 00:28:41.895872 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 10 00:28:41.895880 kernel: CPU features: detected: LSE atomic instructions
Jul 10 00:28:41.895888 kernel: CPU features: detected: Privileged Access Never
Jul 10 00:28:41.895895 kernel: CPU features: detected: RAS Extension Support
Jul 10 00:28:41.895903 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 10 00:28:41.895910 kernel: CPU: All CPU(s) started at EL1
Jul 10 00:28:41.895917 kernel: alternatives: applying system-wide alternatives
Jul 10 00:28:41.895924 kernel: devtmpfs: initialized
Jul 10 00:28:41.895932 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 10 00:28:41.895939 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 10 00:28:41.895946 kernel: pinctrl core: initialized pinctrl subsystem
Jul 10 00:28:41.895955 kernel: SMBIOS 3.0.0 present.
Jul 10 00:28:41.895962 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 10 00:28:41.895969 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 10 00:28:41.895977 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 10 00:28:41.895984 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 10 00:28:41.895992 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 10 00:28:41.895999 kernel: audit: initializing netlink subsys (disabled)
Jul 10 00:28:41.896006 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jul 10 00:28:41.896013 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 10 00:28:41.896022 kernel: cpuidle: using governor menu
Jul 10 00:28:41.896030 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 10 00:28:41.896037 kernel: ASID allocator initialised with 32768 entries
Jul 10 00:28:41.896044 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 10 00:28:41.896051 kernel: Serial: AMBA PL011 UART driver
Jul 10 00:28:41.896072 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 10 00:28:41.896080 kernel: Modules: 0 pages in range for non-PLT usage
Jul 10 00:28:41.896087 kernel: Modules: 509008 pages in range for PLT usage
Jul 10 00:28:41.896094 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 10 00:28:41.896103 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 10 00:28:41.896110 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 10 00:28:41.896117 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 10 00:28:41.896125 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 10 00:28:41.896132 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 10 00:28:41.896139 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 10 00:28:41.896147 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 10 00:28:41.896154 kernel: ACPI: Added _OSI(Module Device)
Jul 10 00:28:41.896162 kernel: ACPI: Added _OSI(Processor Device)
Jul 10 00:28:41.896170 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 10 00:28:41.896178 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 10 00:28:41.896185 kernel: ACPI: Interpreter enabled
Jul 10 00:28:41.896192 kernel: ACPI: Using GIC for interrupt routing
Jul 10 00:28:41.896199 kernel: ACPI: MCFG table detected, 1 entries
Jul 10 00:28:41.896207 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 10 00:28:41.896214 kernel: printk: console [ttyAMA0] enabled
Jul 10 00:28:41.896221 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 10 00:28:41.896371 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 10 00:28:41.896447 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 10 00:28:41.896512 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 10 00:28:41.896575 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 10 00:28:41.896637 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 10 00:28:41.896647 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 10 00:28:41.896654 kernel: PCI host bridge to bus 0000:00
Jul 10 00:28:41.896722 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 10 00:28:41.896795 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 10 00:28:41.896855 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 10 00:28:41.896913 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 10 00:28:41.896992 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 10 00:28:41.897098 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 10 00:28:41.897166 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 10 00:28:41.897236 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 10 00:28:41.897319 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:28:41.897386 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 10 00:28:41.897450 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 10 00:28:41.897515 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 10 00:28:41.897574 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 10 00:28:41.897632 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 10 00:28:41.897694 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 10 00:28:41.897704 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 10 00:28:41.897711 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 10 00:28:41.897719 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 10 00:28:41.897726 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 10 00:28:41.897733 kernel: iommu: Default domain type: Translated
Jul 10 00:28:41.897741 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 10 00:28:41.897765 kernel: efivars: Registered efivars operations
Jul 10 00:28:41.897772 kernel: vgaarb: loaded
Jul 10 00:28:41.897780 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 10 00:28:41.897787 kernel: VFS: Disk quotas dquot_6.6.0
Jul 10 00:28:41.897794 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 10 00:28:41.897794 kernel: pnp: PnP ACPI init
Jul 10 00:28:41.897876 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 10 00:28:41.897887 kernel: pnp: PnP ACPI: found 1 devices
Jul 10 00:28:41.897894 kernel: NET: Registered PF_INET protocol family
Jul 10 00:28:41.897902 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 10 00:28:41.897912 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 10 00:28:41.897919 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 10 00:28:41.897927 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 10 00:28:41.897934 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 10 00:28:41.897942 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 10 00:28:41.897950 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:28:41.897957 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 10 00:28:41.897965 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 10 00:28:41.897974 kernel: PCI: CLS 0 bytes, default 64
Jul 10 00:28:41.897982 kernel: kvm [1]: HYP mode not available
Jul 10 00:28:41.897989 kernel: Initialise system trusted keyrings
Jul 10 00:28:41.897997 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 10 00:28:41.898004 kernel: Key type asymmetric registered
Jul 10 00:28:41.898011 kernel: Asymmetric key parser 'x509' registered
Jul 10 00:28:41.898018 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 10 00:28:41.898025 kernel: io scheduler mq-deadline registered
Jul 10 00:28:41.898033 kernel: io scheduler kyber registered
Jul 10 00:28:41.898040 kernel: io scheduler bfq registered
Jul 10 00:28:41.898048 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 10 00:28:41.898056 kernel: ACPI: button: Power Button [PWRB]
Jul 10 00:28:41.898063 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 10 00:28:41.898131 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 10 00:28:41.898141 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 10 00:28:41.898148 kernel: thunder_xcv, ver 1.0
Jul 10 00:28:41.898156 kernel: thunder_bgx, ver 1.0
Jul 10 00:28:41.898163 kernel: nicpf, ver 1.0
Jul 10 00:28:41.898170 kernel: nicvf, ver 1.0
Jul 10 00:28:41.898248 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 10 00:28:41.898327 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-10T00:28:41 UTC (1752107321)
Jul 10 00:28:41.898338 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 10 00:28:41.898346 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 10 00:28:41.898354 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 10 00:28:41.898361 kernel: watchdog: Hard watchdog permanently disabled
Jul 10 00:28:41.898371 kernel: NET: Registered PF_INET6 protocol family
Jul 10 00:28:41.898379 kernel: Segment Routing with IPv6
Jul 10 00:28:41.898389 kernel: In-situ OAM (IOAM) with IPv6
Jul 10 00:28:41.898399 kernel: NET: Registered PF_PACKET protocol family
Jul 10 00:28:41.898409 kernel: Key type dns_resolver registered
Jul 10 00:28:41.898416 kernel: registered taskstats version 1
Jul 10 00:28:41.898423 kernel: Loading compiled-in X.509 certificates
Jul 10 00:28:41.898431 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 9cbc45ab00feb4acb0fa362a962909c99fb6ef52'
Jul 10 00:28:41.898439 kernel: Key type .fscrypt registered
Jul 10 00:28:41.898446 kernel: Key type fscrypt-provisioning registered
Jul 10 00:28:41.898454 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 10 00:28:41.898464 kernel: ima: Allocated hash algorithm: sha1
Jul 10 00:28:41.898471 kernel: ima: No architecture policies found
Jul 10 00:28:41.898479 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 10 00:28:41.898487 kernel: clk: Disabling unused clocks
Jul 10 00:28:41.898495 kernel: Freeing unused kernel memory: 39424K
Jul 10 00:28:41.898505 kernel: Run /init as init process
Jul 10 00:28:41.898515 kernel: with arguments:
Jul 10 00:28:41.898526 kernel: /init
Jul 10 00:28:41.898533 kernel: with environment:
Jul 10 00:28:41.898543 kernel: HOME=/
Jul 10 00:28:41.898550 kernel: TERM=linux
Jul 10 00:28:41.898557 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 10 00:28:41.898573 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 10 00:28:41.898590 systemd[1]: Detected virtualization kvm.
Jul 10 00:28:41.898599 systemd[1]: Detected architecture arm64.
Jul 10 00:28:41.898607 systemd[1]: Running in initrd.
Jul 10 00:28:41.898616 systemd[1]: No hostname configured, using default hostname.
Jul 10 00:28:41.898624 systemd[1]: Hostname set to .
Jul 10 00:28:41.898633 systemd[1]: Initializing machine ID from VM UUID.
Jul 10 00:28:41.898642 systemd[1]: Queued start job for default target initrd.target.
Jul 10 00:28:41.898650 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 10 00:28:41.898658 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 10 00:28:41.898668 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 10 00:28:41.898676 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 10 00:28:41.898686 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 10 00:28:41.898694 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 10 00:28:41.898703 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 10 00:28:41.898712 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 10 00:28:41.898720 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 10 00:28:41.898728 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 10 00:28:41.898736 systemd[1]: Reached target paths.target - Path Units.
Jul 10 00:28:41.898746 systemd[1]: Reached target slices.target - Slice Units.
Jul 10 00:28:41.898759 systemd[1]: Reached target swap.target - Swaps.
Jul 10 00:28:41.898768 systemd[1]: Reached target timers.target - Timer Units.
Jul 10 00:28:41.898776 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 10 00:28:41.898784 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 10 00:28:41.898792 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 10 00:28:41.898800 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 10 00:28:41.898808 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 10 00:28:41.898816 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 10 00:28:41.898826 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 10 00:28:41.898834 systemd[1]: Reached target sockets.target - Socket Units.
Jul 10 00:28:41.898842 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 10 00:28:41.898850 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 10 00:28:41.898858 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 10 00:28:41.898866 systemd[1]: Starting systemd-fsck-usr.service...
Jul 10 00:28:41.898873 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 10 00:28:41.898881 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 10 00:28:41.898891 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:28:41.898899 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 10 00:28:41.898907 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 10 00:28:41.898914 systemd[1]: Finished systemd-fsck-usr.service.
Jul 10 00:28:41.903948 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 10 00:28:41.903977 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:28:41.903988 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 10 00:28:41.904026 systemd-journald[237]: Collecting audit messages is disabled.
Jul 10 00:28:41.904048 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:28:41.904056 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 10 00:28:41.904064 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 10 00:28:41.904072 kernel: Bridge firewalling registered
Jul 10 00:28:41.904081 systemd-journald[237]: Journal started
Jul 10 00:28:41.904102 systemd-journald[237]: Runtime Journal (/run/log/journal/bae9678fd81a4817bba073d25623a458) is 5.9M, max 47.3M, 41.4M free.
Jul 10 00:28:41.888253 systemd-modules-load[238]: Inserted module 'overlay'
Jul 10 00:28:41.905633 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 10 00:28:41.904449 systemd-modules-load[238]: Inserted module 'br_netfilter'
Jul 10 00:28:41.906479 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 10 00:28:41.916440 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 10 00:28:41.917776 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 10 00:28:41.919041 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:28:41.920656 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 10 00:28:41.924631 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 10 00:28:41.926136 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 10 00:28:41.928260 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 10 00:28:41.938411 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 10 00:28:41.940437 dracut-cmdline[272]: dracut-dracut-053
Jul 10 00:28:41.942868 dracut-cmdline[272]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2fac453dfc912247542078b0007b053dde4e47cc6ef808508492c36b6016a78f
Jul 10 00:28:41.962708 systemd-resolved[278]: Positive Trust Anchors:
Jul 10 00:28:41.962727 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 10 00:28:41.962766 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 10 00:28:41.967427 systemd-resolved[278]: Defaulting to hostname 'linux'.
Jul 10 00:28:41.968311 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 10 00:28:41.970380 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:28:42.014312 kernel: SCSI subsystem initialized
Jul 10 00:28:42.019304 kernel: Loading iSCSI transport class v2.0-870.
Jul 10 00:28:42.026318 kernel: iscsi: registered transport (tcp)
Jul 10 00:28:42.039313 kernel: iscsi: registered transport (qla4xxx)
Jul 10 00:28:42.039347 kernel: QLogic iSCSI HBA Driver
Jul 10 00:28:42.081349 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 10 00:28:42.091428 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 10 00:28:42.107458 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 10 00:28:42.110550 kernel: device-mapper: uevent: version 1.0.3
Jul 10 00:28:42.110567 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 10 00:28:42.158320 kernel: raid6: neonx8 gen() 15483 MB/s
Jul 10 00:28:42.175312 kernel: raid6: neonx4 gen() 15272 MB/s
Jul 10 00:28:42.192341 kernel: raid6: neonx2 gen() 12899 MB/s
Jul 10 00:28:42.209318 kernel: raid6: neonx1 gen() 10242 MB/s
Jul 10 00:28:42.226305 kernel: raid6: int64x8 gen() 6823 MB/s
Jul 10 00:28:42.245796 kernel: raid6: int64x4 gen() 7176 MB/s
Jul 10 00:28:42.260334 kernel: raid6: int64x2 gen() 6067 MB/s
Jul 10 00:28:42.277334 kernel: raid6: int64x1 gen() 5034 MB/s
Jul 10 00:28:42.277374 kernel: raid6: using algorithm neonx8 gen() 15483 MB/s
Jul 10 00:28:42.294336 kernel: raid6: .... xor() 11895 MB/s, rmw enabled
Jul 10 00:28:42.294374 kernel: raid6: using neon recovery algorithm
Jul 10 00:28:42.299434 kernel: xor: measuring software checksum speed
Jul 10 00:28:42.299472 kernel: 8regs : 19807 MB/sec
Jul 10 00:28:42.300433 kernel: 32regs : 19646 MB/sec
Jul 10 00:28:42.300469 kernel: arm64_neon : 26945 MB/sec
Jul 10 00:28:42.300479 kernel: xor: using function: arm64_neon (26945 MB/sec)
Jul 10 00:28:42.351325 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 10 00:28:42.362891 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 10 00:28:42.372495 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 10 00:28:42.383461 systemd-udevd[458]: Using default interface naming scheme 'v255'.
Jul 10 00:28:42.386580 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 10 00:28:42.388890 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 10 00:28:42.403545 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation
Jul 10 00:28:42.428499 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 10 00:28:42.438436 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 10 00:28:42.478640 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 10 00:28:42.487448 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 10 00:28:42.500413 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 10 00:28:42.501888 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 10 00:28:42.504564 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:28:42.505434 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 10 00:28:42.512431 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 10 00:28:42.520317 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 10 00:28:42.521847 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 10 00:28:42.527767 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 10 00:28:42.527785 kernel: GPT:9289727 != 19775487
Jul 10 00:28:42.525979 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 10 00:28:42.530313 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 10 00:28:42.536312 kernel: GPT:9289727 != 19775487
Jul 10 00:28:42.536589 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 10 00:28:42.538245 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 10 00:28:42.538273 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:28:42.536700 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:28:42.539824 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:28:42.540821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 10 00:28:42.540957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:28:42.542838 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:28:42.552558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 10 00:28:42.556316 kernel: BTRFS: device fsid e18a5201-bc0c-484b-ba1b-be3c0a720c32 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (502)
Jul 10 00:28:42.558310 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (508)
Jul 10 00:28:42.566328 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 10 00:28:42.568022 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 10 00:28:42.573251 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 10 00:28:42.579355 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 10 00:28:42.580203 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 10 00:28:42.585525 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 10 00:28:42.599500 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 10 00:28:42.601038 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 10 00:28:42.606812 disk-uuid[548]: Primary Header is updated.
Jul 10 00:28:42.606812 disk-uuid[548]: Secondary Entries is updated.
Jul 10 00:28:42.606812 disk-uuid[548]: Secondary Header is updated.
Jul 10 00:28:42.609358 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:28:42.627313 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:28:42.627587 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 10 00:28:43.627314 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 10 00:28:43.628325 disk-uuid[549]: The operation has completed successfully.
Jul 10 00:28:43.651823 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 10 00:28:43.651917 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 10 00:28:43.676441 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 10 00:28:43.679437 sh[573]: Success
Jul 10 00:28:43.697958 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 10 00:28:43.737761 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 10 00:28:43.739469 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 10 00:28:43.740210 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 10 00:28:43.750745 kernel: BTRFS info (device dm-0): first mount of filesystem e18a5201-bc0c-484b-ba1b-be3c0a720c32
Jul 10 00:28:43.750792 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:28:43.750812 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 10 00:28:43.752751 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 10 00:28:43.752771 kernel: BTRFS info (device dm-0): using free space tree
Jul 10 00:28:43.756363 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 10 00:28:43.757624 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 10 00:28:43.769431 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 10 00:28:43.770837 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 10 00:28:43.778950 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:28:43.778984 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:28:43.779722 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:28:43.782310 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:28:43.789993 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 10 00:28:43.791387 kernel: BTRFS info (device vda6): last unmount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:28:43.797366 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 10 00:28:43.802911 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 10 00:28:43.878334 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 10 00:28:43.887450 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 10 00:28:43.915375 systemd-networkd[764]: lo: Link UP
Jul 10 00:28:43.915385 systemd-networkd[764]: lo: Gained carrier
Jul 10 00:28:43.916106 systemd-networkd[764]: Enumeration completed
Jul 10 00:28:43.916849 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 10 00:28:43.918232 systemd[1]: Reached target network.target - Network.
Jul 10 00:28:43.918876 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:28:43.918879 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 10 00:28:43.919793 systemd-networkd[764]: eth0: Link UP
Jul 10 00:28:43.919796 systemd-networkd[764]: eth0: Gained carrier
Jul 10 00:28:43.919803 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 10 00:28:43.929463 ignition[658]: Ignition 2.19.0
Jul 10 00:28:43.929477 ignition[658]: Stage: fetch-offline
Jul 10 00:28:43.929513 ignition[658]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:28:43.929521 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:28:43.929741 ignition[658]: parsed url from cmdline: ""
Jul 10 00:28:43.929744 ignition[658]: no config URL provided
Jul 10 00:28:43.929757 ignition[658]: reading system config file "/usr/lib/ignition/user.ign"
Jul 10 00:28:43.929765 ignition[658]: no config at "/usr/lib/ignition/user.ign"
Jul 10 00:28:43.929789 ignition[658]: op(1): [started] loading QEMU firmware config module
Jul 10 00:28:43.929793 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 10 00:28:43.938542 ignition[658]: op(1): [finished] loading QEMU firmware config module
Jul 10 00:28:43.939325 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 10 00:28:43.977584 ignition[658]: parsing config with SHA512: daecb485a236b4f1220969075ff219ffbb43839f8f205ebfebb6543ede86932249627af81aa72cf92d8f41bf561077f76414b45c530055543378b178cb671f2a
Jul 10 00:28:43.982109 unknown[658]: fetched base config from "system"
Jul 10 00:28:43.982119 unknown[658]: fetched user config from "qemu"
Jul 10 00:28:43.982971 ignition[658]: fetch-offline: fetch-offline passed
Jul 10 00:28:43.983995 ignition[658]: Ignition finished successfully
Jul 10 00:28:43.985058 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 10 00:28:43.986630 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 10 00:28:43.995473 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 10 00:28:44.007007 ignition[770]: Ignition 2.19.0
Jul 10 00:28:44.007018 ignition[770]: Stage: kargs
Jul 10 00:28:44.007197 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:28:44.007208 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:28:44.008278 ignition[770]: kargs: kargs passed
Jul 10 00:28:44.008347 ignition[770]: Ignition finished successfully
Jul 10 00:28:44.011007 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 10 00:28:44.019455 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 10 00:28:44.030198 ignition[779]: Ignition 2.19.0
Jul 10 00:28:44.030208 ignition[779]: Stage: disks
Jul 10 00:28:44.030395 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jul 10 00:28:44.030405 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:28:44.031277 ignition[779]: disks: disks passed
Jul 10 00:28:44.031362 ignition[779]: Ignition finished successfully
Jul 10 00:28:44.033527 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 10 00:28:44.034552 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 10 00:28:44.035724 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 10 00:28:44.037170 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 10 00:28:44.038617 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 10 00:28:44.039886 systemd[1]: Reached target basic.target - Basic System.
Jul 10 00:28:44.053430 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 10 00:28:44.063681 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 10 00:28:44.067890 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 10 00:28:44.069780 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 10 00:28:44.117167 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 10 00:28:44.118370 kernel: EXT4-fs (vda9): mounted filesystem c566fdd5-af6f-4008-858c-a2aed765f9b4 r/w with ordered data mode. Quota mode: none.
Jul 10 00:28:44.118279 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 10 00:28:44.126390 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:28:44.127865 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 10 00:28:44.129016 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 10 00:28:44.129054 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 10 00:28:44.135923 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (798)
Jul 10 00:28:44.135946 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:28:44.135956 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:28:44.135966 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:28:44.129078 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 10 00:28:44.135863 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 10 00:28:44.138823 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 10 00:28:44.140626 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:28:44.141263 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:28:44.186241 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Jul 10 00:28:44.189481 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Jul 10 00:28:44.193611 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Jul 10 00:28:44.196642 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 10 00:28:44.267360 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 10 00:28:44.275372 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 10 00:28:44.276788 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 10 00:28:44.282299 kernel: BTRFS info (device vda6): last unmount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:28:44.296888 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 10 00:28:44.300542 ignition[911]: INFO : Ignition 2.19.0
Jul 10 00:28:44.300542 ignition[911]: INFO : Stage: mount
Jul 10 00:28:44.301765 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:28:44.301765 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:28:44.301765 ignition[911]: INFO : mount: mount passed
Jul 10 00:28:44.301765 ignition[911]: INFO : Ignition finished successfully
Jul 10 00:28:44.303139 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 10 00:28:44.313405 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 10 00:28:44.750131 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 10 00:28:44.761470 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 10 00:28:44.768680 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (925)
Jul 10 00:28:44.768724 kernel: BTRFS info (device vda6): first mount of filesystem 8ce7827a-be35-4e5a-9c5c-f9bfd6370ac0
Jul 10 00:28:44.768735 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 10 00:28:44.769405 kernel: BTRFS info (device vda6): using free space tree
Jul 10 00:28:44.772296 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 10 00:28:44.773079 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 10 00:28:44.794609 ignition[942]: INFO : Ignition 2.19.0
Jul 10 00:28:44.794609 ignition[942]: INFO : Stage: files
Jul 10 00:28:44.795938 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 10 00:28:44.795938 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 10 00:28:44.795938 ignition[942]: DEBUG : files: compiled without relabeling support, skipping
Jul 10 00:28:44.798538 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 10 00:28:44.798538 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 10 00:28:44.801630 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 10 00:28:44.802701 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 10 00:28:44.802701 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 10 00:28:44.802183 unknown[942]: wrote ssh authorized keys file for user: core
Jul 10 00:28:44.805514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 10 00:28:44.805514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 10 00:28:44.805514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 10 00:28:44.805514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 10 00:28:44.938726 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 10 00:28:45.201488 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 10 00:28:45.203373 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 10 00:28:45.203373 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 10 00:28:45.203373 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:28:45.203373 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 10 00:28:45.203373 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:28:45.203373 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 10 00:28:45.203373 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:28:45.203373 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 10 00:28:45.203373 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:28:45.217338 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 10 00:28:45.217338 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 10 00:28:45.217338 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 10 00:28:45.217338 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 10 00:28:45.217338 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Jul 10 00:28:45.398267 systemd-networkd[764]: eth0: Gained IPv6LL
Jul 10 00:28:45.662498 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 10 00:28:46.174218 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Jul 10 00:28:46.174218 ignition[942]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 10 00:28:46.177043 ignition[942]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 10 00:28:46.177043 ignition[942]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 10 00:28:46.177043 ignition[942]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 10 00:28:46.177043 ignition[942]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 10 00:28:46.177043 ignition[942]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:28:46.177043 ignition[942]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 10 00:28:46.177043 ignition[942]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 10 00:28:46.177043 ignition[942]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jul 10 00:28:46.177043 ignition[942]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:28:46.177043 ignition[942]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 10 00:28:46.177043 ignition[942]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jul 10 00:28:46.177043 ignition[942]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:28:46.204426 ignition[942]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:28:46.208618 ignition[942]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 10 00:28:46.209848 ignition[942]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 10 00:28:46.209848 ignition[942]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jul 10 00:28:46.209848 ignition[942]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jul 10 00:28:46.209848 ignition[942]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:28:46.209848 ignition[942]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 10 00:28:46.209848 ignition[942]: INFO : files: files passed
Jul 10 00:28:46.209848 ignition[942]: INFO : Ignition finished successfully
Jul 10 00:28:46.211329 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 10 00:28:46.219504 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 10 00:28:46.222685 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 10 00:28:46.224490 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 10 00:28:46.224576 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 10 00:28:46.229497 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 10 00:28:46.232445 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:28:46.232445 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:28:46.234665 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 10 00:28:46.234494 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 10 00:28:46.235964 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 10 00:28:46.247537 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 10 00:28:46.267352 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 10 00:28:46.267480 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 10 00:28:46.269260 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 10 00:28:46.270704 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 10 00:28:46.272115 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 10 00:28:46.273023 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 10 00:28:46.290236 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:28:46.303483 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 10 00:28:46.313150 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 10 00:28:46.314237 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 10 00:28:46.315817 systemd[1]: Stopped target timers.target - Timer Units.
Jul 10 00:28:46.317128 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 10 00:28:46.317260 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 10 00:28:46.319173 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 10 00:28:46.320690 systemd[1]: Stopped target basic.target - Basic System.
Jul 10 00:28:46.322039 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 10 00:28:46.323424 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 10 00:28:46.324941 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 10 00:28:46.326403 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 10 00:28:46.327922 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 10 00:28:46.329559 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 10 00:28:46.331219 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 10 00:28:46.332787 systemd[1]: Stopped target swap.target - Swaps. Jul 10 00:28:46.334021 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 00:28:46.334153 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 10 00:28:46.335961 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:28:46.337434 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:28:46.338913 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 10 00:28:46.342357 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:28:46.343328 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 00:28:46.343444 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 10 00:28:46.345543 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 00:28:46.345657 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 10 00:28:46.347267 systemd[1]: Stopped target paths.target - Path Units. Jul 10 00:28:46.348455 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 00:28:46.349360 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:28:46.350721 systemd[1]: Stopped target slices.target - Slice Units. Jul 10 00:28:46.352138 systemd[1]: Stopped target sockets.target - Socket Units. Jul 10 00:28:46.353854 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 00:28:46.353940 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 10 00:28:46.355173 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 00:28:46.355253 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 10 00:28:46.356580 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 00:28:46.356700 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 10 00:28:46.358031 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 00:28:46.358135 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 10 00:28:46.368486 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 10 00:28:46.369997 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 10 00:28:46.370887 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 00:28:46.371006 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:28:46.373342 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 00:28:46.373449 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 10 00:28:46.379179 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 00:28:46.379273 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 10 00:28:46.381446 ignition[996]: INFO : Ignition 2.19.0 Jul 10 00:28:46.381446 ignition[996]: INFO : Stage: umount Jul 10 00:28:46.381446 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 10 00:28:46.381446 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 10 00:28:46.381446 ignition[996]: INFO : umount: umount passed Jul 10 00:28:46.381446 ignition[996]: INFO : Ignition finished successfully Jul 10 00:28:46.384712 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 00:28:46.385279 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 00:28:46.387329 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 10 00:28:46.389264 systemd[1]: Stopped target network.target - Network. Jul 10 00:28:46.391005 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 00:28:46.391102 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 10 00:28:46.392356 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 10 00:28:46.392399 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 10 00:28:46.393767 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 00:28:46.393811 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 10 00:28:46.395051 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 10 00:28:46.395090 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 10 00:28:46.396721 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 10 00:28:46.398159 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 10 00:28:46.399686 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 00:28:46.399788 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 10 00:28:46.401316 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 00:28:46.401491 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 10 00:28:46.405809 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 00:28:46.405914 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 10 00:28:46.406337 systemd-networkd[764]: eth0: DHCPv6 lease lost Jul 10 00:28:46.408514 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 00:28:46.408619 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 10 00:28:46.411876 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 00:28:46.411908 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:28:46.419494 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 10 00:28:46.420232 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 00:28:46.420315 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 10 00:28:46.421984 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 00:28:46.422026 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:28:46.423461 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 00:28:46.423503 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 10 00:28:46.425129 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 10 00:28:46.425166 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 10 00:28:46.426758 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 10 00:28:46.436101 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 00:28:46.436209 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 10 00:28:46.447926 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 00:28:46.448071 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:28:46.449888 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 00:28:46.449927 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 10 00:28:46.451097 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 00:28:46.451126 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:28:46.452406 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 00:28:46.452447 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 10 00:28:46.454404 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 00:28:46.454445 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 10 00:28:46.456358 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 00:28:46.456395 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 10 00:28:46.466483 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 10 00:28:46.467275 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 10 00:28:46.467346 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:28:46.468982 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 10 00:28:46.469024 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:28:46.470534 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 00:28:46.470570 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:28:46.472176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 00:28:46.472233 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:28:46.473989 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 00:28:46.474088 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 10 00:28:46.475847 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 10 00:28:46.477588 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 10 00:28:46.487844 systemd[1]: Switching root. Jul 10 00:28:46.521009 systemd-journald[237]: Journal stopped Jul 10 00:28:47.236046 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). 
Jul 10 00:28:47.236101 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 00:28:47.236115 kernel: SELinux: policy capability open_perms=1 Jul 10 00:28:47.236125 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 00:28:47.236138 kernel: SELinux: policy capability always_check_network=0 Jul 10 00:28:47.236149 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 00:28:47.236158 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 00:28:47.236168 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 00:28:47.236177 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 00:28:47.236187 kernel: audit: type=1403 audit(1752107326.697:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 10 00:28:47.236197 systemd[1]: Successfully loaded SELinux policy in 31.012ms. Jul 10 00:28:47.236217 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.193ms. Jul 10 00:28:47.236230 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 10 00:28:47.236242 systemd[1]: Detected virtualization kvm. Jul 10 00:28:47.236254 systemd[1]: Detected architecture arm64. Jul 10 00:28:47.236265 systemd[1]: Detected first boot. Jul 10 00:28:47.236275 systemd[1]: Initializing machine ID from VM UUID. Jul 10 00:28:47.236313 zram_generator::config[1059]: No configuration found. Jul 10 00:28:47.236326 systemd[1]: Populated /etc with preset unit settings. Jul 10 00:28:47.236337 systemd[1]: Queued start job for default target multi-user.target. Jul 10 00:28:47.236347 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 10 00:28:47.236361 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 10 00:28:47.236372 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 10 00:28:47.236389 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 10 00:28:47.236399 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 10 00:28:47.236411 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 10 00:28:47.236422 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 10 00:28:47.236433 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 10 00:28:47.236444 systemd[1]: Created slice user.slice - User and Session Slice. Jul 10 00:28:47.236455 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 10 00:28:47.236467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 10 00:28:47.236478 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 10 00:28:47.236489 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 10 00:28:47.236500 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 10 00:28:47.236511 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jul 10 00:28:47.236522 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 10 00:28:47.236533 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 10 00:28:47.236543 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 10 00:28:47.236554 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 10 00:28:47.236566 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 10 00:28:47.236577 systemd[1]: Reached target slices.target - Slice Units. Jul 10 00:28:47.236588 systemd[1]: Reached target swap.target - Swaps. Jul 10 00:28:47.236599 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 10 00:28:47.236609 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 10 00:28:47.236622 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 10 00:28:47.236632 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 10 00:28:47.236644 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 10 00:28:47.236656 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 10 00:28:47.236667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 10 00:28:47.236678 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 10 00:28:47.236689 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 10 00:28:47.236699 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 10 00:28:47.236709 systemd[1]: Mounting media.mount - External Media Directory... Jul 10 00:28:47.236720 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 10 00:28:47.236731 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 10 00:28:47.236748 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 10 00:28:47.236764 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 10 00:28:47.236775 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:28:47.236790 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 10 00:28:47.236801 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 10 00:28:47.236811 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:28:47.236822 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:28:47.236833 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:28:47.236843 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 10 00:28:47.236855 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:28:47.236867 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 00:28:47.236878 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 10 00:28:47.236890 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 10 00:28:47.236901 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jul 10 00:28:47.236911 kernel: loop: module loaded Jul 10 00:28:47.236922 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 10 00:28:47.236932 kernel: fuse: init (API version 7.39) Jul 10 00:28:47.236942 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 10 00:28:47.236954 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 10 00:28:47.236966 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 10 00:28:47.236977 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 10 00:28:47.236988 kernel: ACPI: bus type drm_connector registered Jul 10 00:28:47.236998 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 10 00:28:47.237009 systemd[1]: Mounted media.mount - External Media Directory. Jul 10 00:28:47.237020 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 10 00:28:47.237030 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 10 00:28:47.237041 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 10 00:28:47.237054 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 10 00:28:47.237082 systemd-journald[1141]: Collecting audit messages is disabled. Jul 10 00:28:47.237104 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 00:28:47.237115 systemd-journald[1141]: Journal started Jul 10 00:28:47.237139 systemd-journald[1141]: Runtime Journal (/run/log/journal/bae9678fd81a4817bba073d25623a458) is 5.9M, max 47.3M, 41.4M free. Jul 10 00:28:47.237174 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 10 00:28:47.240300 systemd[1]: Started systemd-journald.service - Journal Service. Jul 10 00:28:47.241234 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 10 00:28:47.242365 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:28:47.242521 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:28:47.243566 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:28:47.243717 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:28:47.244797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:28:47.244945 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:28:47.246054 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 00:28:47.246211 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 10 00:28:47.247346 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:28:47.247539 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:28:47.248611 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 10 00:28:47.250036 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 10 00:28:47.251232 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 10 00:28:47.262520 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 10 00:28:47.273388 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 10 00:28:47.275154 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jul 10 00:28:47.276022 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 00:28:47.278488 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 10 00:28:47.280241 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 10 00:28:47.281262 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:28:47.282498 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 10 00:28:47.283328 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:28:47.285593 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 10 00:28:47.289435 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 10 00:28:47.292051 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 10 00:28:47.293175 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 10 00:28:47.294162 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 10 00:28:47.295308 systemd-journald[1141]: Time spent on flushing to /var/log/journal/bae9678fd81a4817bba073d25623a458 is 19.761ms for 848 entries. Jul 10 00:28:47.295308 systemd-journald[1141]: System Journal (/var/log/journal/bae9678fd81a4817bba073d25623a458) is 8.0M, max 195.6M, 187.6M free. Jul 10 00:28:47.321371 systemd-journald[1141]: Received client request to flush runtime journal. Jul 10 00:28:47.295486 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 10 00:28:47.311815 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 10 00:28:47.313544 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 10 00:28:47.316454 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 10 00:28:47.327566 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 10 00:28:47.330594 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 10 00:28:47.331317 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jul 10 00:28:47.331336 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Jul 10 00:28:47.335411 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 10 00:28:47.343525 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 10 00:28:47.363241 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 10 00:28:47.377471 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 10 00:28:47.389078 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jul 10 00:28:47.389098 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jul 10 00:28:47.392770 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 10 00:28:47.714014 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 10 00:28:47.727535 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jul 10 00:28:47.746362 systemd-udevd[1222]: Using default interface naming scheme 'v255'. Jul 10 00:28:47.758762 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 10 00:28:47.771844 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 10 00:28:47.787489 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 10 00:28:47.794913 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Jul 10 00:28:47.824586 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1232) Jul 10 00:28:47.830476 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 10 00:28:47.844332 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 10 00:28:47.893398 systemd-networkd[1229]: lo: Link UP Jul 10 00:28:47.893413 systemd-networkd[1229]: lo: Gained carrier Jul 10 00:28:47.894088 systemd-networkd[1229]: Enumeration completed Jul 10 00:28:47.894526 systemd-networkd[1229]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:28:47.894529 systemd-networkd[1229]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 10 00:28:47.895136 systemd-networkd[1229]: eth0: Link UP Jul 10 00:28:47.895141 systemd-networkd[1229]: eth0: Gained carrier Jul 10 00:28:47.895152 systemd-networkd[1229]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 10 00:28:47.896526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 10 00:28:47.897499 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 10 00:28:47.899849 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 10 00:28:47.909784 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 10 00:28:47.912326 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 10 00:28:47.913361 systemd-networkd[1229]: eth0: DHCPv4 address 10.0.0.65/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 10 00:28:47.934446 lvm[1261]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:28:47.941707 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 10 00:28:47.963917 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 10 00:28:47.965111 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 10 00:28:47.977545 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 10 00:28:47.982185 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 00:28:48.006875 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 10 00:28:48.008041 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 10 00:28:48.009000 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 00:28:48.009031 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 10 00:28:48.009788 systemd[1]: Reached target machines.target - Containers. 
Jul 10 00:28:48.011783 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 10 00:28:48.019429 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 10 00:28:48.021402 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 10 00:28:48.022389 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:28:48.023350 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 10 00:28:48.026232 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 10 00:28:48.030450 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 10 00:28:48.032065 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 10 00:28:48.046800 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 10 00:28:48.048310 kernel: loop0: detected capacity change from 0 to 114432 Jul 10 00:28:48.051363 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 00:28:48.052824 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 10 00:28:48.061319 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 00:28:48.097316 kernel: loop1: detected capacity change from 0 to 114328 Jul 10 00:28:48.136314 kernel: loop2: detected capacity change from 0 to 203944 Jul 10 00:28:48.172312 kernel: loop3: detected capacity change from 0 to 114432 Jul 10 00:28:48.177321 kernel: loop4: detected capacity change from 0 to 114328 Jul 10 00:28:48.183309 kernel: loop5: detected capacity change from 0 to 203944 Jul 10 00:28:48.186292 (sd-merge)[1288]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 10 00:28:48.186684 (sd-merge)[1288]: Merged extensions into '/usr'. Jul 10 00:28:48.190212 systemd[1]: Reloading requested from client PID 1276 ('systemd-sysext') (unit systemd-sysext.service)... Jul 10 00:28:48.190228 systemd[1]: Reloading... Jul 10 00:28:48.235381 zram_generator::config[1321]: No configuration found. Jul 10 00:28:48.272030 ldconfig[1272]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 00:28:48.330561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:28:48.373235 systemd[1]: Reloading finished in 182 ms. Jul 10 00:28:48.386977 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 10 00:28:48.388437 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 10 00:28:48.406447 systemd[1]: Starting ensure-sysext.service... Jul 10 00:28:48.408376 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 10 00:28:48.412660 systemd[1]: Reloading requested from client PID 1360 ('systemctl') (unit ensure-sysext.service)... Jul 10 00:28:48.412755 systemd[1]: Reloading... Jul 10 00:28:48.424916 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jul 10 00:28:48.425185 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 10 00:28:48.425848 systemd-tmpfiles[1361]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 00:28:48.426068 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jul 10 00:28:48.426123 systemd-tmpfiles[1361]: ACLs are not supported, ignoring. Jul 10 00:28:48.428476 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:28:48.428488 systemd-tmpfiles[1361]: Skipping /boot Jul 10 00:28:48.435596 systemd-tmpfiles[1361]: Detected autofs mount point /boot during canonicalization of boot. Jul 10 00:28:48.435611 systemd-tmpfiles[1361]: Skipping /boot Jul 10 00:28:48.453524 zram_generator::config[1390]: No configuration found. Jul 10 00:28:48.543349 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:28:48.585455 systemd[1]: Reloading finished in 172 ms. Jul 10 00:28:48.601946 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 10 00:28:48.617088 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 10 00:28:48.619642 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 10 00:28:48.622128 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 10 00:28:48.629915 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 10 00:28:48.633387 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 10 00:28:48.650964 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:28:48.663264 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:28:48.667426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:28:48.672555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:28:48.673482 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:28:48.674530 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 10 00:28:48.676720 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 10 00:28:48.679036 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 10 00:28:48.680524 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:28:48.680674 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:28:48.682172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:28:48.682331 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:28:48.683723 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:28:48.683939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:28:48.690132 augenrules[1464]: No rules Jul 10 00:28:48.691919 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jul 10 00:28:48.694109 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:28:48.709520 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:28:48.711443 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:28:48.714605 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:28:48.715463 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:28:48.719667 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 10 00:28:48.720547 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:28:48.722062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:28:48.722222 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:28:48.723776 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:28:48.723956 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:28:48.724479 systemd-resolved[1436]: Positive Trust Anchors: Jul 10 00:28:48.724497 systemd-resolved[1436]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 00:28:48.724529 systemd-resolved[1436]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 10 00:28:48.725402 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:28:48.725599 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:28:48.730482 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 10 00:28:48.732435 systemd-resolved[1436]: Defaulting to hostname 'linux'. Jul 10 00:28:48.733961 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 10 00:28:48.743449 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 10 00:28:48.745261 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 10 00:28:48.746980 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 10 00:28:48.748888 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 10 00:28:48.749799 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 10 00:28:48.749857 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 10 00:28:48.750052 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 10 00:28:48.751473 systemd[1]: Finished ensure-sysext.service. 
Jul 10 00:28:48.752499 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 00:28:48.752641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 10 00:28:48.753810 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 00:28:48.753939 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 10 00:28:48.755120 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 00:28:48.755250 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 10 00:28:48.756471 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 00:28:48.756661 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 10 00:28:48.761886 systemd[1]: Reached target network.target - Network. Jul 10 00:28:48.762723 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 10 00:28:48.763629 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 00:28:48.763698 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 10 00:28:48.772498 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 10 00:28:48.817618 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 10 00:28:48.331025 systemd[1]: Reached target sysinit.target - System Initialization. Jul 10 00:28:48.341240 systemd-journald[1141]: Time jumped backwards, rotating. Jul 10 00:28:48.331083 systemd-timesyncd[1504]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 10 00:28:48.331113 systemd-resolved[1436]: Clock change detected. Flushing caches. Jul 10 00:28:48.331123 systemd-timesyncd[1504]: Initial clock synchronization to Thu 2025-07-10 00:28:48.330985 UTC. Jul 10 00:28:48.332466 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 10 00:28:48.333517 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 10 00:28:48.334552 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 10 00:28:48.335527 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 00:28:48.335551 systemd[1]: Reached target paths.target - Path Units. Jul 10 00:28:48.336333 systemd[1]: Reached target time-set.target - System Time Set. Jul 10 00:28:48.337321 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 10 00:28:48.338476 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 10 00:28:48.339386 systemd[1]: Reached target timers.target - Timer Units. Jul 10 00:28:48.340778 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 10 00:28:48.343004 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 10 00:28:48.345747 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 10 00:28:48.350383 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 10 00:28:48.351167 systemd[1]: Reached target sockets.target - Socket Units. Jul 10 00:28:48.351895 systemd[1]: Reached target basic.target - Basic System. 
Jul 10 00:28:48.352770 systemd[1]: System is tainted: cgroupsv1 Jul 10 00:28:48.352827 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:28:48.352850 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 10 00:28:48.353946 systemd[1]: Starting containerd.service - containerd container runtime... Jul 10 00:28:48.355767 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 10 00:28:48.357543 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 10 00:28:48.361593 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 10 00:28:48.362555 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 10 00:28:48.365211 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 10 00:28:48.368474 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 10 00:28:48.369313 jq[1511]: false Jul 10 00:28:48.372537 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 10 00:28:48.379483 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 10 00:28:48.384552 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 10 00:28:48.388165 extend-filesystems[1513]: Found loop3 Jul 10 00:28:48.388165 extend-filesystems[1513]: Found loop4 Jul 10 00:28:48.388165 extend-filesystems[1513]: Found loop5 Jul 10 00:28:48.388165 extend-filesystems[1513]: Found vda Jul 10 00:28:48.388165 extend-filesystems[1513]: Found vda1 Jul 10 00:28:48.388165 extend-filesystems[1513]: Found vda2 Jul 10 00:28:48.388165 extend-filesystems[1513]: Found vda3 Jul 10 00:28:48.388165 extend-filesystems[1513]: Found usr Jul 10 00:28:48.388165 extend-filesystems[1513]: Found vda4 Jul 10 00:28:48.388165 extend-filesystems[1513]: Found vda6 Jul 10 00:28:48.388165 extend-filesystems[1513]: Found vda7 Jul 10 00:28:48.388165 extend-filesystems[1513]: Found vda9 Jul 10 00:28:48.388165 extend-filesystems[1513]: Checking size of /dev/vda9 Jul 10 00:28:48.388226 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 00:28:48.398945 dbus-daemon[1510]: [system] SELinux support is enabled Jul 10 00:28:48.396138 systemd[1]: Starting update-engine.service - Update Engine... Jul 10 00:28:48.403397 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 10 00:28:48.404300 extend-filesystems[1513]: Resized partition /dev/vda9 Jul 10 00:28:48.404894 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 10 00:28:48.417192 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1239) Jul 10 00:28:48.415516 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 00:28:48.417293 extend-filesystems[1537]: resize2fs 1.47.1 (20-May-2024) Jul 10 00:28:48.418185 jq[1535]: true Jul 10 00:28:48.415752 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 10 00:28:48.416039 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 00:28:48.416234 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 10 00:28:48.420095 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 10 00:28:48.423410 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 10 00:28:48.426381 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 10 00:28:48.447882 jq[1543]: true Jul 10 00:28:48.449111 (ntainerd)[1551]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 10 00:28:48.463796 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 00:28:48.463841 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 10 00:28:48.465380 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 10 00:28:48.466011 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 00:28:48.466039 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 10 00:28:48.471555 systemd[1]: Started update-engine.service - Update Engine. Jul 10 00:28:48.497244 tar[1541]: linux-arm64/helm Jul 10 00:28:48.497520 update_engine[1530]: I20250710 00:28:48.469189 1530 main.cc:92] Flatcar Update Engine starting Jul 10 00:28:48.497520 update_engine[1530]: I20250710 00:28:48.471588 1530 update_check_scheduler.cc:74] Next update check in 8m45s Jul 10 00:28:48.475246 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 10 00:28:48.486663 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 10 00:28:48.497436 systemd-logind[1524]: Watching system buttons on /dev/input/event0 (Power Button) Jul 10 00:28:48.497632 systemd-logind[1524]: New seat seat0. Jul 10 00:28:48.498427 systemd[1]: Started systemd-logind.service - User Login Management. Jul 10 00:28:48.506045 extend-filesystems[1537]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 10 00:28:48.506045 extend-filesystems[1537]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 10 00:28:48.506045 extend-filesystems[1537]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 10 00:28:48.509780 extend-filesystems[1513]: Resized filesystem in /dev/vda9 Jul 10 00:28:48.506091 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 00:28:48.506332 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 10 00:28:48.525748 bash[1571]: Updated "/home/core/.ssh/authorized_keys" Jul 10 00:28:48.526501 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 10 00:28:48.529059 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 10 00:28:48.535883 locksmithd[1564]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 00:28:48.648940 containerd[1551]: time="2025-07-10T00:28:48.648540538Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 10 00:28:48.676548 containerd[1551]: time="2025-07-10T00:28:48.676485338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 10 00:28:48.678111 containerd[1551]: time="2025-07-10T00:28:48.678051618Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:28:48.678111 containerd[1551]: time="2025-07-10T00:28:48.678092098Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 00:28:48.678111 containerd[1551]: time="2025-07-10T00:28:48.678112218Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 00:28:48.678410 containerd[1551]: time="2025-07-10T00:28:48.678388538Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 10 00:28:48.678437 containerd[1551]: time="2025-07-10T00:28:48.678414978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 10 00:28:48.678553 containerd[1551]: time="2025-07-10T00:28:48.678533498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:28:48.678577 containerd[1551]: time="2025-07-10T00:28:48.678556818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:28:48.678910 containerd[1551]: time="2025-07-10T00:28:48.678874498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:28:48.678910 containerd[1551]: time="2025-07-10T00:28:48.678900978Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 00:28:48.678951 containerd[1551]: time="2025-07-10T00:28:48.678922218Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:28:48.678951 containerd[1551]: time="2025-07-10T00:28:48.678933338Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 00:28:48.679040 containerd[1551]: time="2025-07-10T00:28:48.679024018Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:28:48.679231 containerd[1551]: time="2025-07-10T00:28:48.679214738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 10 00:28:48.679383 containerd[1551]: time="2025-07-10T00:28:48.679348018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 00:28:48.679403 containerd[1551]: time="2025-07-10T00:28:48.679384538Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jul 10 00:28:48.679544 containerd[1551]: time="2025-07-10T00:28:48.679460058Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 00:28:48.679544 containerd[1551]: time="2025-07-10T00:28:48.679504138Z" level=info msg="metadata content store policy set" policy=shared Jul 10 00:28:48.683535 systemd-networkd[1229]: eth0: Gained IPv6LL Jul 10 00:28:48.688518 containerd[1551]: time="2025-07-10T00:28:48.688485538Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 00:28:48.688768 containerd[1551]: time="2025-07-10T00:28:48.688639658Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 00:28:48.688768 containerd[1551]: time="2025-07-10T00:28:48.688660818Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 10 00:28:48.688963 containerd[1551]: time="2025-07-10T00:28:48.688946418Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 10 00:28:48.689124 containerd[1551]: time="2025-07-10T00:28:48.689105858Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 00:28:48.689295 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 10 00:28:48.689698 containerd[1551]: time="2025-07-10T00:28:48.689579978Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 00:28:48.690673 systemd[1]: Reached target network-online.target - Network is Online. Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.690679538Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.690863058Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.690881738Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.690894858Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.690911538Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.690925298Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.690941178Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.690955738Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.690999058Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.691012778Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.691026338Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.691038218Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.691058978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692473 containerd[1551]: time="2025-07-10T00:28:48.691073178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691089378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691102498Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691115738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691132338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691145178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691159058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691172298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691187098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691199578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691213618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691226298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691246218Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691267538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691280618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 10 00:28:48.692716 containerd[1551]: time="2025-07-10T00:28:48.691291298Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 00:28:48.692959 containerd[1551]: time="2025-07-10T00:28:48.691414698Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 10 00:28:48.692959 containerd[1551]: time="2025-07-10T00:28:48.691434458Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 10 00:28:48.692959 containerd[1551]: time="2025-07-10T00:28:48.691447098Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 00:28:48.692959 containerd[1551]: time="2025-07-10T00:28:48.691458858Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 10 00:28:48.692959 containerd[1551]: time="2025-07-10T00:28:48.691469258Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.692959 containerd[1551]: time="2025-07-10T00:28:48.691481538Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 10 00:28:48.692959 containerd[1551]: time="2025-07-10T00:28:48.691491098Z" level=info msg="NRI interface is disabled by configuration." Jul 10 00:28:48.692959 containerd[1551]: time="2025-07-10T00:28:48.691502138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 10 00:28:48.693132 containerd[1551]: time="2025-07-10T00:28:48.691843578Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 00:28:48.693132 containerd[1551]: time="2025-07-10T00:28:48.691901378Z" level=info msg="Connect containerd service" Jul 10 00:28:48.693132 containerd[1551]: time="2025-07-10T00:28:48.691998298Z" level=info msg="using legacy CRI server" Jul 10 00:28:48.693132 containerd[1551]: time="2025-07-10T00:28:48.692004978Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 10 00:28:48.693132 containerd[1551]: time="2025-07-10T00:28:48.692083178Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 00:28:48.693704 containerd[1551]: time="2025-07-10T00:28:48.693672498Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 00:28:48.694318 containerd[1551]: time="2025-07-10T00:28:48.694151298Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 00:28:48.694318 containerd[1551]: time="2025-07-10T00:28:48.694198098Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 00:28:48.694318 containerd[1551]: time="2025-07-10T00:28:48.694292098Z" level=info msg="Start subscribing containerd event" Jul 10 00:28:48.694411 containerd[1551]: time="2025-07-10T00:28:48.694327218Z" level=info msg="Start recovering state" Jul 10 00:28:48.694430 containerd[1551]: time="2025-07-10T00:28:48.694417738Z" level=info msg="Start event monitor" Jul 10 00:28:48.694448 containerd[1551]: time="2025-07-10T00:28:48.694432938Z" level=info msg="Start snapshots syncer" Jul 10 00:28:48.694448 containerd[1551]: time="2025-07-10T00:28:48.694443138Z" level=info msg="Start cni network conf syncer for default" Jul 10 00:28:48.694496 containerd[1551]: time="2025-07-10T00:28:48.694451298Z" level=info msg="Start streaming server" Jul 10 00:28:48.694831 containerd[1551]: time="2025-07-10T00:28:48.694561218Z" level=info msg="containerd successfully booted in 0.049030s" Jul 10 00:28:48.702741 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 10 00:28:48.705543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:28:48.709781 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 10 00:28:48.712304 systemd[1]: Started containerd.service - containerd container runtime. Jul 10 00:28:48.733964 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 10 00:28:48.734215 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jul 10 00:28:48.736380 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 10 00:28:48.743933 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 10 00:28:48.856235 tar[1541]: linux-arm64/LICENSE Jul 10 00:28:48.856235 tar[1541]: linux-arm64/README.md Jul 10 00:28:48.875752 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 10 00:28:49.281050 sshd_keygen[1539]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 00:28:49.301491 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 10 00:28:49.305028 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 10 00:28:49.307316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:28:49.311341 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:28:49.314898 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 00:28:49.315136 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 10 00:28:49.326634 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 10 00:28:49.335212 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 10 00:28:49.338112 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 10 00:28:49.340324 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 10 00:28:49.341716 systemd[1]: Reached target getty.target - Login Prompts. Jul 10 00:28:49.342656 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 10 00:28:49.343596 systemd[1]: Startup finished in 5.525s (kernel) + 3.165s (userspace) = 8.691s. Jul 10 00:28:49.788541 kubelet[1637]: E0710 00:28:49.788499 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:28:49.791191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:28:49.791428 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:28:53.908899 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 10 00:28:53.921583 systemd[1]: Started sshd@0-10.0.0.65:22-10.0.0.1:56136.service - OpenSSH per-connection server daemon (10.0.0.1:56136). Jul 10 00:28:53.973762 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 56136 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:53.975636 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:53.984629 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 10 00:28:53.994597 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 10 00:28:53.996442 systemd-logind[1524]: New session 1 of user core. Jul 10 00:28:54.004068 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 10 00:28:54.007098 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 10 00:28:54.013142 (systemd)[1667]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 00:28:54.087343 systemd[1667]: Queued start job for default target default.target. 
Jul 10 00:28:54.087771 systemd[1667]: Created slice app.slice - User Application Slice. Jul 10 00:28:54.087805 systemd[1667]: Reached target paths.target - Paths. Jul 10 00:28:54.087822 systemd[1667]: Reached target timers.target - Timers. Jul 10 00:28:54.102476 systemd[1667]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 10 00:28:54.108096 systemd[1667]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 10 00:28:54.108157 systemd[1667]: Reached target sockets.target - Sockets. Jul 10 00:28:54.108168 systemd[1667]: Reached target basic.target - Basic System. Jul 10 00:28:54.108203 systemd[1667]: Reached target default.target - Main User Target. Jul 10 00:28:54.108226 systemd[1667]: Startup finished in 90ms. Jul 10 00:28:54.108440 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 10 00:28:54.109763 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 10 00:28:54.168603 systemd[1]: Started sshd@1-10.0.0.65:22-10.0.0.1:56148.service - OpenSSH per-connection server daemon (10.0.0.1:56148). Jul 10 00:28:54.209697 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 56148 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:54.211298 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:54.215814 systemd-logind[1524]: New session 2 of user core. Jul 10 00:28:54.223600 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 10 00:28:54.274999 sshd[1679]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:54.286595 systemd[1]: Started sshd@2-10.0.0.65:22-10.0.0.1:56156.service - OpenSSH per-connection server daemon (10.0.0.1:56156). Jul 10 00:28:54.287075 systemd[1]: sshd@1-10.0.0.65:22-10.0.0.1:56148.service: Deactivated successfully. Jul 10 00:28:54.288408 systemd[1]: session-2.scope: Deactivated successfully. Jul 10 00:28:54.289089 systemd-logind[1524]: Session 2 logged out. Waiting for processes to exit. Jul 10 00:28:54.290413 systemd-logind[1524]: Removed session 2. Jul 10 00:28:54.318188 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 56156 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:54.319405 sshd[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:54.327532 systemd-logind[1524]: New session 3 of user core. Jul 10 00:28:54.346753 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 10 00:28:54.398809 sshd[1684]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:54.417634 systemd[1]: Started sshd@3-10.0.0.65:22-10.0.0.1:56158.service - OpenSSH per-connection server daemon (10.0.0.1:56158). Jul 10 00:28:54.418001 systemd[1]: sshd@2-10.0.0.65:22-10.0.0.1:56156.service: Deactivated successfully. Jul 10 00:28:54.421146 systemd[1]: session-3.scope: Deactivated successfully. Jul 10 00:28:54.421874 systemd-logind[1524]: Session 3 logged out. Waiting for processes to exit. Jul 10 00:28:54.423532 systemd-logind[1524]: Removed session 3. Jul 10 00:28:54.452149 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 56158 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:54.452573 sshd[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:54.456113 systemd-logind[1524]: New session 4 of user core. Jul 10 00:28:54.466725 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jul 10 00:28:54.519519 sshd[1692]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:54.528593 systemd[1]: Started sshd@4-10.0.0.65:22-10.0.0.1:56174.service - OpenSSH per-connection server daemon (10.0.0.1:56174). Jul 10 00:28:54.528964 systemd[1]: sshd@3-10.0.0.65:22-10.0.0.1:56158.service: Deactivated successfully. Jul 10 00:28:54.533622 systemd-logind[1524]: Session 4 logged out. Waiting for processes to exit. Jul 10 00:28:54.534328 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 00:28:54.536348 systemd-logind[1524]: Removed session 4. Jul 10 00:28:54.564129 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 56174 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:54.565256 sshd[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:54.569417 systemd-logind[1524]: New session 5 of user core. Jul 10 00:28:54.579617 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 10 00:28:54.643403 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 00:28:54.643674 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:28:54.659520 sudo[1707]: pam_unix(sudo:session): session closed for user root Jul 10 00:28:54.661207 sshd[1700]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:54.679939 systemd[1]: Started sshd@5-10.0.0.65:22-10.0.0.1:56184.service - OpenSSH per-connection server daemon (10.0.0.1:56184). Jul 10 00:28:54.680496 systemd[1]: sshd@4-10.0.0.65:22-10.0.0.1:56174.service: Deactivated successfully. Jul 10 00:28:54.681939 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 00:28:54.683132 systemd-logind[1524]: Session 5 logged out. Waiting for processes to exit. Jul 10 00:28:54.687312 systemd-logind[1524]: Removed session 5. Jul 10 00:28:54.711756 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 56184 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:54.713162 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:54.717434 systemd-logind[1524]: New session 6 of user core. Jul 10 00:28:54.729704 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 10 00:28:54.782124 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 00:28:54.782523 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:28:54.786453 sudo[1717]: pam_unix(sudo:session): session closed for user root Jul 10 00:28:54.790982 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 10 00:28:54.791250 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:28:54.807741 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 10 00:28:54.809712 auditctl[1720]: No rules Jul 10 00:28:54.810159 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 00:28:54.810427 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 10 00:28:54.814238 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 10 00:28:54.841543 augenrules[1739]: No rules Jul 10 00:28:54.842941 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jul 10 00:28:54.845582 sudo[1716]: pam_unix(sudo:session): session closed for user root Jul 10 00:28:54.847617 sshd[1710]: pam_unix(sshd:session): session closed for user core Jul 10 00:28:54.863636 systemd[1]: Started sshd@6-10.0.0.65:22-10.0.0.1:56200.service - OpenSSH per-connection server daemon (10.0.0.1:56200). Jul 10 00:28:54.864083 systemd[1]: sshd@5-10.0.0.65:22-10.0.0.1:56184.service: Deactivated successfully. Jul 10 00:28:54.865415 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 00:28:54.869999 systemd-logind[1524]: Session 6 logged out. Waiting for processes to exit. Jul 10 00:28:54.871381 systemd-logind[1524]: Removed session 6. Jul 10 00:28:54.899801 sshd[1745]: Accepted publickey for core from 10.0.0.1 port 56200 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:28:54.901123 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:28:54.905554 systemd-logind[1524]: New session 7 of user core. Jul 10 00:28:54.913687 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 10 00:28:54.965521 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 00:28:54.965823 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 10 00:28:55.302767 (dockerd)[1771]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 10 00:28:55.302852 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 10 00:28:55.553769 dockerd[1771]: time="2025-07-10T00:28:55.553651138Z" level=info msg="Starting up" Jul 10 00:28:55.802169 dockerd[1771]: time="2025-07-10T00:28:55.802122978Z" level=info msg="Loading containers: start." Jul 10 00:28:55.898406 kernel: Initializing XFRM netlink socket Jul 10 00:28:55.973161 systemd-networkd[1229]: docker0: Link UP Jul 10 00:28:55.996736 dockerd[1771]: time="2025-07-10T00:28:55.996680618Z" level=info msg="Loading containers: done." Jul 10 00:28:56.008764 dockerd[1771]: time="2025-07-10T00:28:56.008711858Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 00:28:56.008884 dockerd[1771]: time="2025-07-10T00:28:56.008824938Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 10 00:28:56.008979 dockerd[1771]: time="2025-07-10T00:28:56.008950778Z" level=info msg="Daemon has completed initialization" Jul 10 00:28:56.045824 dockerd[1771]: time="2025-07-10T00:28:56.045694018Z" level=info msg="API listen on /run/docker.sock" Jul 10 00:28:56.045920 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 10 00:28:56.778687 containerd[1551]: time="2025-07-10T00:28:56.778647858Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 10 00:28:57.378493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2275230382.mount: Deactivated successfully. 
Jul 10 00:28:58.196519 containerd[1551]: time="2025-07-10T00:28:58.196431658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:58.197333 containerd[1551]: time="2025-07-10T00:28:58.197301538Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 10 00:28:58.197982 containerd[1551]: time="2025-07-10T00:28:58.197952538Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:58.200851 containerd[1551]: time="2025-07-10T00:28:58.200809298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:58.202005 containerd[1551]: time="2025-07-10T00:28:58.201960658Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.42326876s" Jul 10 00:28:58.202043 containerd[1551]: time="2025-07-10T00:28:58.202003018Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 10 00:28:58.204903 containerd[1551]: time="2025-07-10T00:28:58.204875538Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 10 00:28:59.176591 containerd[1551]: time="2025-07-10T00:28:59.176545778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:59.177694 containerd[1551]: time="2025-07-10T00:28:59.177649778Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 10 00:28:59.178578 containerd[1551]: time="2025-07-10T00:28:59.178551058Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:59.181372 containerd[1551]: time="2025-07-10T00:28:59.181327858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:28:59.182651 containerd[1551]: time="2025-07-10T00:28:59.182598698Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 977.6904ms" Jul 10 00:28:59.182651 containerd[1551]: time="2025-07-10T00:28:59.182632338Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 10 
00:28:59.183352 containerd[1551]: time="2025-07-10T00:28:59.183167498Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 10 00:28:59.834008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 00:28:59.839520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:28:59.962108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:28:59.965649 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 10 00:29:00.007737 kubelet[1995]: E0710 00:29:00.007690 1995 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 00:29:00.011423 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 00:29:00.011607 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 00:29:00.203322 containerd[1551]: time="2025-07-10T00:29:00.203040458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:00.204226 containerd[1551]: time="2025-07-10T00:29:00.204130738Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 10 00:29:00.204781 containerd[1551]: time="2025-07-10T00:29:00.204744378Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:00.207954 containerd[1551]: time="2025-07-10T00:29:00.207897298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:00.209057 containerd[1551]: time="2025-07-10T00:29:00.209029018Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.02583064s" Jul 10 00:29:00.209105 containerd[1551]: time="2025-07-10T00:29:00.209063378Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 10 00:29:00.209507 containerd[1551]: time="2025-07-10T00:29:00.209484618Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 10 00:29:01.172897 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount891897497.mount: Deactivated successfully. 
Jul 10 00:29:01.489787 containerd[1551]: time="2025-07-10T00:29:01.489669578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:01.490688 containerd[1551]: time="2025-07-10T00:29:01.490483418Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 10 00:29:01.492402 containerd[1551]: time="2025-07-10T00:29:01.491420658Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:01.493914 containerd[1551]: time="2025-07-10T00:29:01.493879058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:01.494663 containerd[1551]: time="2025-07-10T00:29:01.494608178Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.2850918s" Jul 10 00:29:01.494663 containerd[1551]: time="2025-07-10T00:29:01.494646178Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 10 00:29:01.495302 containerd[1551]: time="2025-07-10T00:29:01.495067058Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 00:29:02.014524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount373049894.mount: Deactivated successfully. 
Jul 10 00:29:02.755620 containerd[1551]: time="2025-07-10T00:29:02.755576058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:02.757797 containerd[1551]: time="2025-07-10T00:29:02.757751938Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 10 00:29:02.758612 containerd[1551]: time="2025-07-10T00:29:02.758584698Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:02.761612 containerd[1551]: time="2025-07-10T00:29:02.761576378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:02.763630 containerd[1551]: time="2025-07-10T00:29:02.762794018Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.26769372s" Jul 10 00:29:02.763630 containerd[1551]: time="2025-07-10T00:29:02.762829578Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 10 00:29:02.763630 containerd[1551]: time="2025-07-10T00:29:02.763302778Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 00:29:03.286229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1388013433.mount: Deactivated successfully. 
Jul 10 00:29:03.291420 containerd[1551]: time="2025-07-10T00:29:03.291379738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:03.292153 containerd[1551]: time="2025-07-10T00:29:03.291943858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 10 00:29:03.293010 containerd[1551]: time="2025-07-10T00:29:03.292965538Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:03.295139 containerd[1551]: time="2025-07-10T00:29:03.295109258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:03.296137 containerd[1551]: time="2025-07-10T00:29:03.296101538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 532.7678ms" Jul 10 00:29:03.296204 containerd[1551]: time="2025-07-10T00:29:03.296135938Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 10 00:29:03.296680 containerd[1551]: time="2025-07-10T00:29:03.296661578Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 10 00:29:03.806704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3825099183.mount: Deactivated successfully. Jul 10 00:29:05.093872 containerd[1551]: time="2025-07-10T00:29:05.093817858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:05.094427 containerd[1551]: time="2025-07-10T00:29:05.094387778Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 10 00:29:05.095363 containerd[1551]: time="2025-07-10T00:29:05.095327178Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:05.101442 containerd[1551]: time="2025-07-10T00:29:05.101406138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:05.102728 containerd[1551]: time="2025-07-10T00:29:05.102644858Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.80595436s" Jul 10 00:29:05.102728 containerd[1551]: time="2025-07-10T00:29:05.102680458Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 10 00:29:10.081481 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jul 10 00:29:10.093588 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:29:10.105118 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 00:29:10.105231 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 00:29:10.105561 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:29:10.123619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:29:10.147785 systemd[1]: Reloading requested from client PID 2160 ('systemctl') (unit session-7.scope)... Jul 10 00:29:10.147803 systemd[1]: Reloading... Jul 10 00:29:10.209395 zram_generator::config[2198]: No configuration found. Jul 10 00:29:10.301902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:29:10.353887 systemd[1]: Reloading finished in 205 ms. Jul 10 00:29:10.392876 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 00:29:10.392939 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 00:29:10.393201 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:29:10.395004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:29:10.493194 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:29:10.499156 (kubelet)[2256]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:29:10.533598 kubelet[2256]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:29:10.533598 kubelet[2256]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:29:10.533598 kubelet[2256]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 00:29:10.534010 kubelet[2256]: I0710 00:29:10.533646 2256 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:29:11.794813 kubelet[2256]: I0710 00:29:11.794763 2256 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:29:11.794813 kubelet[2256]: I0710 00:29:11.794799 2256 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:29:11.795161 kubelet[2256]: I0710 00:29:11.795036 2256 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:29:11.832828 kubelet[2256]: E0710 00:29:11.832781 2256 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:11.833822 kubelet[2256]: I0710 00:29:11.833780 2256 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:29:11.845844 kubelet[2256]: E0710 00:29:11.845795 2256 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:29:11.845844 kubelet[2256]: I0710 00:29:11.845832 2256 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:29:11.850703 kubelet[2256]: I0710 00:29:11.850672 2256 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:29:11.853007 kubelet[2256]: I0710 00:29:11.852979 2256 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:29:11.853144 kubelet[2256]: I0710 00:29:11.853105 2256 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:29:11.853338 kubelet[2256]: I0710 00:29:11.853135 2256 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 10 00:29:11.853338 kubelet[2256]: I0710 00:29:11.853337 2256 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:29:11.853452 kubelet[2256]: I0710 00:29:11.853346 2256 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:29:11.853637 kubelet[2256]: I0710 00:29:11.853607 2256 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:29:11.855647 kubelet[2256]: I0710 00:29:11.855612 2256 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:29:11.855647 kubelet[2256]: I0710 00:29:11.855646 2256 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:29:11.855713 kubelet[2256]: I0710 00:29:11.855676 2256 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:29:11.856743 kubelet[2256]: I0710 00:29:11.855761 2256 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:29:11.867765 kubelet[2256]: W0710 00:29:11.867652 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 10 00:29:11.867765 kubelet[2256]: E0710 00:29:11.867719 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:11.867765 kubelet[2256]: W0710 00:29:11.867651 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 10 00:29:11.867765 kubelet[2256]: E0710 00:29:11.867757 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:11.870088 kubelet[2256]: I0710 00:29:11.870061 2256 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 10 00:29:11.870771 kubelet[2256]: I0710 00:29:11.870754 2256 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:29:11.870942 kubelet[2256]: W0710 00:29:11.870930 2256 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 00:29:11.871916 kubelet[2256]: I0710 00:29:11.871894 2256 server.go:1274] "Started kubelet" Jul 10 00:29:11.872941 kubelet[2256]: I0710 00:29:11.872578 2256 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 00:29:11.872941 kubelet[2256]: I0710 00:29:11.872884 2256 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:29:11.872941 kubelet[2256]: I0710 00:29:11.872934 2256 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:29:11.873409 kubelet[2256]: I0710 00:29:11.873176 2256 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:29:11.873794 kubelet[2256]: I0710 00:29:11.873767 2256 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:29:11.874157 kubelet[2256]: I0710 00:29:11.874115 2256 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:29:11.874687 kubelet[2256]: I0710 00:29:11.874656 2256 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:29:11.874766 kubelet[2256]: I0710 00:29:11.874760 2256 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:29:11.874950 kubelet[2256]: I0710 00:29:11.874807 2256 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:29:11.875365 kubelet[2256]: W0710 00:29:11.875099 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 10 00:29:11.875365 kubelet[2256]: E0710 00:29:11.875140 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 10 
00:29:11.876520 kubelet[2256]: I0710 00:29:11.876496 2256 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:29:11.877108 kubelet[2256]: E0710 00:29:11.875940 2256 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.65:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.65:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bc5a7cac3d9a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 00:29:11.871864218 +0000 UTC m=+1.369240521,LastTimestamp:2025-07-10 00:29:11.871864218 +0000 UTC m=+1.369240521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 00:29:11.877648 kubelet[2256]: E0710 00:29:11.877570 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="200ms" Jul 10 00:29:11.877648 kubelet[2256]: E0710 00:29:11.877590 2256 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:29:11.878399 kubelet[2256]: I0710 00:29:11.878381 2256 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:29:11.878491 kubelet[2256]: I0710 00:29:11.878481 2256 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:29:11.878556 kubelet[2256]: E0710 00:29:11.878401 2256 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:29:11.889828 kubelet[2256]: I0710 00:29:11.889785 2256 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:29:11.891380 kubelet[2256]: I0710 00:29:11.891248 2256 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:29:11.891380 kubelet[2256]: I0710 00:29:11.891274 2256 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:29:11.891380 kubelet[2256]: I0710 00:29:11.891296 2256 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:29:11.891380 kubelet[2256]: E0710 00:29:11.891343 2256 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:29:11.892245 kubelet[2256]: W0710 00:29:11.892180 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 10 00:29:11.892311 kubelet[2256]: E0710 00:29:11.892247 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:11.898663 kubelet[2256]: I0710 00:29:11.898637 2256 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:29:11.898663 kubelet[2256]: I0710 00:29:11.898658 2256 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:29:11.898789 kubelet[2256]: I0710 00:29:11.898679 2256 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:29:11.978391 kubelet[2256]: E0710 00:29:11.978343 2256 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:29:11.991476 kubelet[2256]: E0710 00:29:11.991434 2256 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 10 00:29:12.078572 kubelet[2256]: E0710 00:29:12.078462 2256 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 00:29:12.078725 kubelet[2256]: E0710 00:29:12.078673 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="400ms" Jul 10 00:29:12.100052 kubelet[2256]: I0710 00:29:12.100017 2256 policy_none.go:49] "None policy: Start" Jul 10 00:29:12.100711 kubelet[2256]: I0710 00:29:12.100687 2256 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:29:12.100763 kubelet[2256]: I0710 00:29:12.100746 2256 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:29:12.106568 kubelet[2256]: I0710 00:29:12.105880 2256 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:29:12.106568 kubelet[2256]: I0710 00:29:12.106067 2256 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:29:12.106568 kubelet[2256]: I0710 00:29:12.106083 2256 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:29:12.106568 kubelet[2256]: I0710 00:29:12.106494 2256 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:29:12.107825 kubelet[2256]: E0710 00:29:12.107808 2256 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" 
not found" Jul 10 00:29:12.207858 kubelet[2256]: I0710 00:29:12.207818 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:29:12.208343 kubelet[2256]: E0710 00:29:12.208309 2256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jul 10 00:29:12.276786 kubelet[2256]: I0710 00:29:12.276557 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4eee05c5da898962677bbffcc59d0658-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4eee05c5da898962677bbffcc59d0658\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:29:12.276786 kubelet[2256]: I0710 00:29:12.276590 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:12.276786 kubelet[2256]: I0710 00:29:12.276608 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:12.276786 kubelet[2256]: I0710 00:29:12.276626 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:12.276786 kubelet[2256]: I0710 00:29:12.276643 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:12.277011 kubelet[2256]: I0710 00:29:12.276659 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:29:12.277011 kubelet[2256]: I0710 00:29:12.276674 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4eee05c5da898962677bbffcc59d0658-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4eee05c5da898962677bbffcc59d0658\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:29:12.277011 kubelet[2256]: I0710 00:29:12.276689 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4eee05c5da898962677bbffcc59d0658-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"4eee05c5da898962677bbffcc59d0658\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:29:12.277011 kubelet[2256]: I0710 00:29:12.276705 2256 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:12.410560 kubelet[2256]: I0710 00:29:12.410523 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:29:12.410970 kubelet[2256]: E0710 00:29:12.410928 2256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jul 10 00:29:12.479680 kubelet[2256]: E0710 00:29:12.479639 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="800ms" Jul 10 00:29:12.498140 kubelet[2256]: E0710 00:29:12.497862 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:12.498140 kubelet[2256]: E0710 00:29:12.497889 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:12.498553 containerd[1551]: time="2025-07-10T00:29:12.498518418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4eee05c5da898962677bbffcc59d0658,Namespace:kube-system,Attempt:0,}" Jul 10 00:29:12.498882 containerd[1551]: time="2025-07-10T00:29:12.498582378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 10 00:29:12.498909 kubelet[2256]: E0710 00:29:12.498878 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:12.499326 containerd[1551]: time="2025-07-10T00:29:12.499200058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 10 00:29:12.782058 kubelet[2256]: W0710 00:29:12.781917 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 10 00:29:12.782058 kubelet[2256]: E0710 00:29:12.782000 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:12.812468 kubelet[2256]: I0710 00:29:12.812422 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:29:12.812785 kubelet[2256]: E0710 00:29:12.812763 2256 kubelet_node_status.go:95] "Unable to register 
node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jul 10 00:29:13.018920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1613136543.mount: Deactivated successfully. Jul 10 00:29:13.025441 containerd[1551]: time="2025-07-10T00:29:13.024922178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:29:13.027336 containerd[1551]: time="2025-07-10T00:29:13.027310858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 10 00:29:13.029402 containerd[1551]: time="2025-07-10T00:29:13.027857498Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:29:13.029402 containerd[1551]: time="2025-07-10T00:29:13.028606138Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:29:13.029402 containerd[1551]: time="2025-07-10T00:29:13.029400538Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 00:29:13.029682 containerd[1551]: time="2025-07-10T00:29:13.029656178Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:29:13.030311 containerd[1551]: time="2025-07-10T00:29:13.030280138Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 10 00:29:13.034194 containerd[1551]: time="2025-07-10T00:29:13.033903058Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 534.64732ms" Jul 10 00:29:13.035520 containerd[1551]: time="2025-07-10T00:29:13.035476378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 536.82536ms" Jul 10 00:29:13.035994 containerd[1551]: time="2025-07-10T00:29:13.035960578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 10 00:29:13.038037 containerd[1551]: time="2025-07-10T00:29:13.037993258Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.39808ms" Jul 10 00:29:13.197435 containerd[1551]: 
time="2025-07-10T00:29:13.197348178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:29:13.197435 containerd[1551]: time="2025-07-10T00:29:13.197423898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:29:13.197587 containerd[1551]: time="2025-07-10T00:29:13.197451978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:13.197587 containerd[1551]: time="2025-07-10T00:29:13.197537218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:13.197964 containerd[1551]: time="2025-07-10T00:29:13.197724698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:29:13.197964 containerd[1551]: time="2025-07-10T00:29:13.197799858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:29:13.197964 containerd[1551]: time="2025-07-10T00:29:13.197827658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:13.197964 containerd[1551]: time="2025-07-10T00:29:13.197922218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:13.199338 containerd[1551]: time="2025-07-10T00:29:13.198512258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:29:13.199338 containerd[1551]: time="2025-07-10T00:29:13.198566898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:29:13.199338 containerd[1551]: time="2025-07-10T00:29:13.198582658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:13.199338 containerd[1551]: time="2025-07-10T00:29:13.198655098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:13.247261 containerd[1551]: time="2025-07-10T00:29:13.245747538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e0e58462d44b9b87e6d3949387a291a4dc1efcbf18db806773d049fc083fe94\"" Jul 10 00:29:13.248018 kubelet[2256]: E0710 00:29:13.247919 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:13.249186 containerd[1551]: time="2025-07-10T00:29:13.249156778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e678274f2ef7e4843f92a850c54ea4f2691e362ed0c801dfa34c0ef6ff999d7\"" Jul 10 00:29:13.250002 kubelet[2256]: E0710 00:29:13.249912 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:13.250316 kubelet[2256]: W0710 00:29:13.250206 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 10 00:29:13.250316 kubelet[2256]: E0710 00:29:13.250241 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:13.251377 containerd[1551]: time="2025-07-10T00:29:13.251199418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4eee05c5da898962677bbffcc59d0658,Namespace:kube-system,Attempt:0,} returns sandbox id \"33dac7f843b640ee54117f9e8bbc91394646a63ed17d0014bb7e9c594db02f2d\"" Jul 10 00:29:13.252226 containerd[1551]: time="2025-07-10T00:29:13.252011658Z" level=info msg="CreateContainer within sandbox \"8e678274f2ef7e4843f92a850c54ea4f2691e362ed0c801dfa34c0ef6ff999d7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 00:29:13.252226 containerd[1551]: time="2025-07-10T00:29:13.252202338Z" level=info msg="CreateContainer within sandbox \"8e0e58462d44b9b87e6d3949387a291a4dc1efcbf18db806773d049fc083fe94\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 00:29:13.252316 kubelet[2256]: E0710 00:29:13.252237 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:13.253984 containerd[1551]: time="2025-07-10T00:29:13.253677098Z" level=info msg="CreateContainer within sandbox \"33dac7f843b640ee54117f9e8bbc91394646a63ed17d0014bb7e9c594db02f2d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 00:29:13.268701 containerd[1551]: time="2025-07-10T00:29:13.268621178Z" level=info msg="CreateContainer within sandbox \"8e0e58462d44b9b87e6d3949387a291a4dc1efcbf18db806773d049fc083fe94\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"535fdef204285b7eef26678299600a44ae214dedb3b008e2137507d281b8b153\"" Jul 10 00:29:13.269652 containerd[1551]: time="2025-07-10T00:29:13.269620498Z" level=info msg="StartContainer for \"535fdef204285b7eef26678299600a44ae214dedb3b008e2137507d281b8b153\"" Jul 10 00:29:13.274522 containerd[1551]: time="2025-07-10T00:29:13.274483738Z" level=info msg="CreateContainer within sandbox \"8e678274f2ef7e4843f92a850c54ea4f2691e362ed0c801dfa34c0ef6ff999d7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8a1b6b4a172ba21d2494885b5ba2736e75a8db9afaaa9570372845d97ae10620\"" Jul 10 00:29:13.275084 containerd[1551]: time="2025-07-10T00:29:13.275059458Z" level=info msg="StartContainer for \"8a1b6b4a172ba21d2494885b5ba2736e75a8db9afaaa9570372845d97ae10620\"" Jul 10 00:29:13.275436 containerd[1551]: time="2025-07-10T00:29:13.275328418Z" level=info msg="CreateContainer within sandbox \"33dac7f843b640ee54117f9e8bbc91394646a63ed17d0014bb7e9c594db02f2d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de71f3d00203ca1294feea0308d13292cb21a4e1a417a1cef23c76f8944b4edf\"" Jul 10 00:29:13.276500 containerd[1551]: time="2025-07-10T00:29:13.276423218Z" level=info msg="StartContainer for \"de71f3d00203ca1294feea0308d13292cb21a4e1a417a1cef23c76f8944b4edf\"" Jul 10 00:29:13.281196 kubelet[2256]: E0710 00:29:13.281155 2256 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.65:6443: connect: connection refused" interval="1.6s" Jul 10 00:29:13.341876 containerd[1551]: time="2025-07-10T00:29:13.340693978Z" level=info msg="StartContainer for \"535fdef204285b7eef26678299600a44ae214dedb3b008e2137507d281b8b153\" returns successfully" Jul 10 00:29:13.346002 containerd[1551]: time="2025-07-10T00:29:13.345111978Z" level=info msg="StartContainer for \"de71f3d00203ca1294feea0308d13292cb21a4e1a417a1cef23c76f8944b4edf\" returns successfully" Jul 10 00:29:13.364222 containerd[1551]: time="2025-07-10T00:29:13.364101938Z" level=info msg="StartContainer for \"8a1b6b4a172ba21d2494885b5ba2736e75a8db9afaaa9570372845d97ae10620\" returns successfully" Jul 10 00:29:13.460174 kubelet[2256]: W0710 00:29:13.454515 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 10 00:29:13.460174 kubelet[2256]: E0710 00:29:13.454590 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.65:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:13.460174 kubelet[2256]: W0710 00:29:13.457998 2256 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.65:6443: connect: connection refused Jul 10 00:29:13.460174 kubelet[2256]: E0710 00:29:13.458036 2256 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.65:6443: connect: connection refused" logger="UnhandledError" Jul 10 00:29:13.614951 kubelet[2256]: I0710 00:29:13.614844 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:29:13.615276 kubelet[2256]: E0710 00:29:13.615167 2256 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.65:6443/api/v1/nodes\": dial tcp 10.0.0.65:6443: connect: connection refused" node="localhost" Jul 10 00:29:13.901427 kubelet[2256]: E0710 00:29:13.898748 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:13.902110 kubelet[2256]: E0710 00:29:13.902087 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:13.903391 kubelet[2256]: E0710 00:29:13.903338 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:14.909274 kubelet[2256]: E0710 00:29:14.909208 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:14.909630 kubelet[2256]: E0710 00:29:14.909343 2256 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:15.217308 kubelet[2256]: I0710 00:29:15.217196 2256 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:29:16.489059 kubelet[2256]: E0710 00:29:16.489017 2256 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 00:29:16.609006 kubelet[2256]: I0710 00:29:16.608965 2256 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:29:16.869982 kubelet[2256]: I0710 00:29:16.869941 2256 apiserver.go:52] "Watching apiserver" Jul 10 00:29:16.875792 kubelet[2256]: I0710 00:29:16.875739 2256 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:29:18.486286 systemd[1]: Reloading requested from client PID 2535 ('systemctl') (unit session-7.scope)... Jul 10 00:29:18.486304 systemd[1]: Reloading... Jul 10 00:29:18.545397 zram_generator::config[2577]: No configuration found. Jul 10 00:29:18.629796 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 00:29:18.688308 systemd[1]: Reloading finished in 201 ms. Jul 10 00:29:18.714849 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:29:18.730408 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 00:29:18.730727 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 10 00:29:18.740573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 10 00:29:18.836823 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 10 00:29:18.840239 (kubelet)[2626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 10 00:29:18.877232 kubelet[2626]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:29:18.877232 kubelet[2626]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 00:29:18.877232 kubelet[2626]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 00:29:18.878891 kubelet[2626]: I0710 00:29:18.877266 2626 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 00:29:18.884313 kubelet[2626]: I0710 00:29:18.882686 2626 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 00:29:18.884313 kubelet[2626]: I0710 00:29:18.882720 2626 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 00:29:18.884313 kubelet[2626]: I0710 00:29:18.882946 2626 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 00:29:18.884698 kubelet[2626]: I0710 00:29:18.884547 2626 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 00:29:18.886734 kubelet[2626]: I0710 00:29:18.886692 2626 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 00:29:18.893676 kubelet[2626]: E0710 00:29:18.893388 2626 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 00:29:18.893676 kubelet[2626]: I0710 00:29:18.893428 2626 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 00:29:18.896188 kubelet[2626]: I0710 00:29:18.896165 2626 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 00:29:18.896685 kubelet[2626]: I0710 00:29:18.896668 2626 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 00:29:18.896908 kubelet[2626]: I0710 00:29:18.896877 2626 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 00:29:18.897127 kubelet[2626]: I0710 00:29:18.896957 2626 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 10 00:29:18.897240 kubelet[2626]: I0710 00:29:18.897228 2626 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 00:29:18.897287 kubelet[2626]: I0710 00:29:18.897280 2626 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 00:29:18.897389 kubelet[2626]: I0710 00:29:18.897353 2626 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:29:18.897563 kubelet[2626]: I0710 00:29:18.897548 2626 kubelet.go:408] "Attempting to sync node with API server" Jul 10 00:29:18.897630 kubelet[2626]: I0710 00:29:18.897621 2626 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 00:29:18.899375 kubelet[2626]: I0710 00:29:18.897677 2626 kubelet.go:314] "Adding apiserver pod source" Jul 10 00:29:18.899375 kubelet[2626]: I0710 00:29:18.897695 2626 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 00:29:18.899375 kubelet[2626]: I0710 00:29:18.898527 2626 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 10 00:29:18.900136 kubelet[2626]: I0710 00:29:18.900112 2626 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 00:29:18.901421 kubelet[2626]: I0710 00:29:18.901393 2626 server.go:1274] "Started kubelet" Jul 10 00:29:18.902849 kubelet[2626]: I0710 00:29:18.902788 2626 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 
00:29:18.903105 kubelet[2626]: I0710 00:29:18.903072 2626 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 00:29:18.903770 kubelet[2626]: I0710 00:29:18.903735 2626 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 00:29:18.904186 kubelet[2626]: I0710 00:29:18.904166 2626 server.go:449] "Adding debug handlers to kubelet server" Jul 10 00:29:18.908984 kubelet[2626]: I0710 00:29:18.908966 2626 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 00:29:18.909177 kubelet[2626]: I0710 00:29:18.909157 2626 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 00:29:18.916613 kubelet[2626]: I0710 00:29:18.916579 2626 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 00:29:18.918505 kubelet[2626]: I0710 00:29:18.918480 2626 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 00:29:18.918784 kubelet[2626]: I0710 00:29:18.918772 2626 reconciler.go:26] "Reconciler: start to sync state" Jul 10 00:29:18.919304 kubelet[2626]: I0710 00:29:18.919275 2626 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 00:29:18.919597 kubelet[2626]: E0710 00:29:18.919577 2626 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 10 00:29:18.919871 kubelet[2626]: I0710 00:29:18.919745 2626 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 00:29:18.921435 kubelet[2626]: I0710 00:29:18.921418 2626 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 00:29:18.921511 kubelet[2626]: I0710 00:29:18.921502 2626 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 00:29:18.921564 kubelet[2626]: I0710 00:29:18.921557 2626 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 00:29:18.921675 kubelet[2626]: E0710 00:29:18.921659 2626 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 00:29:18.924658 kubelet[2626]: I0710 00:29:18.924632 2626 factory.go:221] Registration of the containerd container factory successfully Jul 10 00:29:18.926068 kubelet[2626]: I0710 00:29:18.926053 2626 factory.go:221] Registration of the systemd container factory successfully Jul 10 00:29:18.961789 kubelet[2626]: I0710 00:29:18.961764 2626 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 00:29:18.961963 kubelet[2626]: I0710 00:29:18.961947 2626 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 00:29:18.962023 kubelet[2626]: I0710 00:29:18.962015 2626 state_mem.go:36] "Initialized new in-memory state store" Jul 10 00:29:18.962245 kubelet[2626]: I0710 00:29:18.962227 2626 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 00:29:18.962324 kubelet[2626]: I0710 00:29:18.962301 2626 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 00:29:18.962427 kubelet[2626]: I0710 00:29:18.962415 2626 policy_none.go:49] "None policy: Start" Jul 10 00:29:18.963090 kubelet[2626]: I0710 00:29:18.963075 2626 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 00:29:18.963166 kubelet[2626]: I0710 00:29:18.963157 2626 state_mem.go:35] "Initializing new in-memory state store" Jul 10 00:29:18.963350 kubelet[2626]: I0710 00:29:18.963336 2626 state_mem.go:75] "Updated machine memory state" Jul 10 00:29:18.964574 kubelet[2626]: I0710 00:29:18.964552 2626 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 00:29:18.964837 kubelet[2626]: I0710 00:29:18.964820 2626 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 00:29:18.964921 kubelet[2626]: I0710 00:29:18.964893 2626 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 00:29:18.965281 kubelet[2626]: I0710 00:29:18.965263 2626 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 00:29:19.073250 kubelet[2626]: I0710 00:29:19.073148 2626 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 00:29:19.080443 kubelet[2626]: I0710 00:29:19.080346 2626 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 10 00:29:19.080612 kubelet[2626]: I0710 00:29:19.080600 2626 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 00:29:19.119495 kubelet[2626]: I0710 00:29:19.119456 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4eee05c5da898962677bbffcc59d0658-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4eee05c5da898962677bbffcc59d0658\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:29:19.119495 kubelet[2626]: I0710 00:29:19.119494 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4eee05c5da898962677bbffcc59d0658-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4eee05c5da898962677bbffcc59d0658\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:29:19.119690 kubelet[2626]: I0710 00:29:19.119516 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:19.119690 kubelet[2626]: I0710 00:29:19.119533 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:19.119690 kubelet[2626]: I0710 00:29:19.119548 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:19.119690 kubelet[2626]: I0710 00:29:19.119562 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 00:29:19.119690 kubelet[2626]: I0710 00:29:19.119575 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:19.119801 kubelet[2626]: I0710 00:29:19.119589 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 00:29:19.119801 kubelet[2626]: I0710 00:29:19.119603 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4eee05c5da898962677bbffcc59d0658-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4eee05c5da898962677bbffcc59d0658\") " pod="kube-system/kube-apiserver-localhost" Jul 10 00:29:19.333003 kubelet[2626]: E0710 00:29:19.332741 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:19.333003 kubelet[2626]: E0710 00:29:19.332749 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:19.333003 kubelet[2626]: E0710 00:29:19.332819 2626 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:19.899204 kubelet[2626]: I0710 00:29:19.899153 2626 apiserver.go:52] "Watching apiserver" Jul 10 00:29:19.919506 kubelet[2626]: I0710 00:29:19.919461 2626 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 00:29:19.938235 kubelet[2626]: E0710 00:29:19.938199 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:19.938560 kubelet[2626]: E0710 00:29:19.938477 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:19.939843 kubelet[2626]: E0710 00:29:19.939657 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:19.963071 kubelet[2626]: I0710 00:29:19.963003 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.962986738 podStartE2EDuration="962.986738ms" podCreationTimestamp="2025-07-10 00:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:29:19.956267098 +0000 UTC m=+1.112876601" watchObservedRunningTime="2025-07-10 00:29:19.962986738 +0000 UTC m=+1.119596161" Jul 10 00:29:19.963208 kubelet[2626]: I0710 00:29:19.963121 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.963115498 podStartE2EDuration="963.115498ms" podCreationTimestamp="2025-07-10 00:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:29:19.963103898 +0000 UTC m=+1.119713321" watchObservedRunningTime="2025-07-10 00:29:19.963115498 +0000 UTC m=+1.119724921" Jul 10 00:29:19.981510 kubelet[2626]: I0710 00:29:19.981320 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.981300898 podStartE2EDuration="981.300898ms" podCreationTimestamp="2025-07-10 00:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:29:19.970578338 +0000 UTC m=+1.127187801" watchObservedRunningTime="2025-07-10 00:29:19.981300898 +0000 UTC m=+1.137910281" Jul 10 00:29:20.939824 kubelet[2626]: E0710 00:29:20.939774 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:20.940550 kubelet[2626]: E0710 00:29:20.940516 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:23.892374 kubelet[2626]: I0710 00:29:23.892317 2626 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 00:29:23.892792 containerd[1551]: time="2025-07-10T00:29:23.892666303Z" 
level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 00:29:23.893695 kubelet[2626]: I0710 00:29:23.893133 2626 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 00:29:24.847776 kubelet[2626]: E0710 00:29:24.847514 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:24.945328 kubelet[2626]: E0710 00:29:24.945288 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:24.954580 kubelet[2626]: I0710 00:29:24.954121 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ac376598-a8aa-485a-9669-a81709e7682f-kube-proxy\") pod \"kube-proxy-s72nw\" (UID: \"ac376598-a8aa-485a-9669-a81709e7682f\") " pod="kube-system/kube-proxy-s72nw" Jul 10 00:29:24.954580 kubelet[2626]: I0710 00:29:24.954165 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ac376598-a8aa-485a-9669-a81709e7682f-xtables-lock\") pod \"kube-proxy-s72nw\" (UID: \"ac376598-a8aa-485a-9669-a81709e7682f\") " pod="kube-system/kube-proxy-s72nw" Jul 10 00:29:24.954580 kubelet[2626]: I0710 00:29:24.954188 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac376598-a8aa-485a-9669-a81709e7682f-lib-modules\") pod \"kube-proxy-s72nw\" (UID: \"ac376598-a8aa-485a-9669-a81709e7682f\") " pod="kube-system/kube-proxy-s72nw" Jul 10 00:29:24.954580 kubelet[2626]: I0710 00:29:24.954204 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q9n9\" (UniqueName: \"kubernetes.io/projected/ac376598-a8aa-485a-9669-a81709e7682f-kube-api-access-4q9n9\") pod \"kube-proxy-s72nw\" (UID: \"ac376598-a8aa-485a-9669-a81709e7682f\") " pod="kube-system/kube-proxy-s72nw" Jul 10 00:29:25.155567 kubelet[2626]: I0710 00:29:25.155457 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9zc4\" (UniqueName: \"kubernetes.io/projected/824018c1-a129-4f73-bf15-4ea0ed2faaaa-kube-api-access-t9zc4\") pod \"tigera-operator-5bf8dfcb4-mg7fj\" (UID: \"824018c1-a129-4f73-bf15-4ea0ed2faaaa\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-mg7fj" Jul 10 00:29:25.155567 kubelet[2626]: I0710 00:29:25.155498 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/824018c1-a129-4f73-bf15-4ea0ed2faaaa-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-mg7fj\" (UID: \"824018c1-a129-4f73-bf15-4ea0ed2faaaa\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-mg7fj" Jul 10 00:29:25.237324 kubelet[2626]: E0710 00:29:25.237242 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:25.238208 containerd[1551]: time="2025-07-10T00:29:25.237854373Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-s72nw,Uid:ac376598-a8aa-485a-9669-a81709e7682f,Namespace:kube-system,Attempt:0,}" Jul 10 00:29:25.258892 containerd[1551]: time="2025-07-10T00:29:25.258771369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:29:25.258892 containerd[1551]: time="2025-07-10T00:29:25.258845808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:29:25.259067 containerd[1551]: time="2025-07-10T00:29:25.258878768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:25.259527 containerd[1551]: time="2025-07-10T00:29:25.259474960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:25.273216 systemd[1]: run-containerd-runc-k8s.io-decd8a6261a285ca6d301841b82a4fc385d2bca4e92f70439ff3c924acb3f04c-runc.qVSZ0w.mount: Deactivated successfully. Jul 10 00:29:25.289057 containerd[1551]: time="2025-07-10T00:29:25.289021559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s72nw,Uid:ac376598-a8aa-485a-9669-a81709e7682f,Namespace:kube-system,Attempt:0,} returns sandbox id \"decd8a6261a285ca6d301841b82a4fc385d2bca4e92f70439ff3c924acb3f04c\"" Jul 10 00:29:25.289831 kubelet[2626]: E0710 00:29:25.289811 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:25.291592 containerd[1551]: time="2025-07-10T00:29:25.291564604Z" level=info msg="CreateContainer within sandbox \"decd8a6261a285ca6d301841b82a4fc385d2bca4e92f70439ff3c924acb3f04c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 00:29:25.303798 containerd[1551]: time="2025-07-10T00:29:25.303754439Z" level=info msg="CreateContainer within sandbox \"decd8a6261a285ca6d301841b82a4fc385d2bca4e92f70439ff3c924acb3f04c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b227f35f90fd61b56d074651f2fda8fd0d8af97601f492dfbdcc81c57dc21e0d\"" Jul 10 00:29:25.304339 containerd[1551]: time="2025-07-10T00:29:25.304310591Z" level=info msg="StartContainer for \"b227f35f90fd61b56d074651f2fda8fd0d8af97601f492dfbdcc81c57dc21e0d\"" Jul 10 00:29:25.360020 containerd[1551]: time="2025-07-10T00:29:25.359977436Z" level=info msg="StartContainer for \"b227f35f90fd61b56d074651f2fda8fd0d8af97601f492dfbdcc81c57dc21e0d\" returns successfully" Jul 10 00:29:25.377944 containerd[1551]: time="2025-07-10T00:29:25.377892953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-mg7fj,Uid:824018c1-a129-4f73-bf15-4ea0ed2faaaa,Namespace:tigera-operator,Attempt:0,}" Jul 10 00:29:25.395993 containerd[1551]: time="2025-07-10T00:29:25.395879349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:29:25.396660 containerd[1551]: time="2025-07-10T00:29:25.396425661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:29:25.396660 containerd[1551]: time="2025-07-10T00:29:25.396491540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:25.396860 containerd[1551]: time="2025-07-10T00:29:25.396795496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:25.444307 containerd[1551]: time="2025-07-10T00:29:25.444195893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-mg7fj,Uid:824018c1-a129-4f73-bf15-4ea0ed2faaaa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3d8e8d7441b97e38978084f68f1eb2449eaca68ab964e8682ba86bdb57968e83\"" Jul 10 00:29:25.447100 containerd[1551]: time="2025-07-10T00:29:25.447007695Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 10 00:29:25.948869 kubelet[2626]: E0710 00:29:25.948830 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:26.661182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount639151105.mount: Deactivated successfully. Jul 10 00:29:26.934718 kubelet[2626]: E0710 00:29:26.934465 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:26.953207 kubelet[2626]: E0710 00:29:26.950767 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:26.953582 kubelet[2626]: I0710 00:29:26.952285 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s72nw" podStartSLOduration=2.952271709 podStartE2EDuration="2.952271709s" podCreationTimestamp="2025-07-10 00:29:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:29:25.959241623 +0000 UTC m=+7.115851046" watchObservedRunningTime="2025-07-10 00:29:26.952271709 +0000 UTC m=+8.108881092" Jul 10 00:29:27.192102 containerd[1551]: time="2025-07-10T00:29:27.191986451Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:27.192561 containerd[1551]: time="2025-07-10T00:29:27.192523885Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Jul 10 00:29:27.193456 containerd[1551]: time="2025-07-10T00:29:27.193425714Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:27.196040 containerd[1551]: time="2025-07-10T00:29:27.195672847Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:27.196640 containerd[1551]: time="2025-07-10T00:29:27.196603716Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.749554302s" Jul 10 00:29:27.196696 containerd[1551]: 
time="2025-07-10T00:29:27.196639116Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Jul 10 00:29:27.199700 containerd[1551]: time="2025-07-10T00:29:27.199665520Z" level=info msg="CreateContainer within sandbox \"3d8e8d7441b97e38978084f68f1eb2449eaca68ab964e8682ba86bdb57968e83\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 00:29:27.235537 containerd[1551]: time="2025-07-10T00:29:27.235436013Z" level=info msg="CreateContainer within sandbox \"3d8e8d7441b97e38978084f68f1eb2449eaca68ab964e8682ba86bdb57968e83\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8b976d5f882007446e140c0719ae66ce1a148f44086cb05794b4d7476f799566\"" Jul 10 00:29:27.237068 containerd[1551]: time="2025-07-10T00:29:27.236142604Z" level=info msg="StartContainer for \"8b976d5f882007446e140c0719ae66ce1a148f44086cb05794b4d7476f799566\"" Jul 10 00:29:27.291885 containerd[1551]: time="2025-07-10T00:29:27.289568767Z" level=info msg="StartContainer for \"8b976d5f882007446e140c0719ae66ce1a148f44086cb05794b4d7476f799566\" returns successfully" Jul 10 00:29:27.953126 kubelet[2626]: E0710 00:29:27.953084 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:30.620935 kubelet[2626]: E0710 00:29:30.620903 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:30.643602 kubelet[2626]: I0710 00:29:30.643535 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-mg7fj" podStartSLOduration=3.891719747 podStartE2EDuration="5.64351966s" podCreationTimestamp="2025-07-10 00:29:25 +0000 UTC" firstStartedPulling="2025-07-10 00:29:25.445666993 +0000 UTC m=+6.602276376" lastFinishedPulling="2025-07-10 00:29:27.197466866 +0000 UTC m=+8.354076289" observedRunningTime="2025-07-10 00:29:27.962038786 +0000 UTC m=+9.118648209" watchObservedRunningTime="2025-07-10 00:29:30.64351966 +0000 UTC m=+11.800129043" Jul 10 00:29:33.014498 sudo[1752]: pam_unix(sudo:session): session closed for user root Jul 10 00:29:33.023048 sshd[1745]: pam_unix(sshd:session): session closed for user core Jul 10 00:29:33.028800 systemd[1]: sshd@6-10.0.0.65:22-10.0.0.1:56200.service: Deactivated successfully. Jul 10 00:29:33.031212 systemd-logind[1524]: Session 7 logged out. Waiting for processes to exit. Jul 10 00:29:33.031315 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 00:29:33.040758 systemd-logind[1524]: Removed session 7. Jul 10 00:29:33.655368 update_engine[1530]: I20250710 00:29:33.654140 1530 update_attempter.cc:509] Updating boot flags... 
Jul 10 00:29:33.703979 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3035) Jul 10 00:29:33.734423 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3034) Jul 10 00:29:38.381444 kubelet[2626]: W0710 00:29:38.381374 2626 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jul 10 00:29:38.381843 kubelet[2626]: E0710 00:29:38.381453 2626 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 10 00:29:38.555755 kubelet[2626]: I0710 00:29:38.554885 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4810c7cf-cf9e-4a69-8589-61f3df54b67d-typha-certs\") pod \"calico-typha-77db9ddbfd-kwg4h\" (UID: \"4810c7cf-cf9e-4a69-8589-61f3df54b67d\") " pod="calico-system/calico-typha-77db9ddbfd-kwg4h" Jul 10 00:29:38.555755 kubelet[2626]: I0710 00:29:38.554948 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4810c7cf-cf9e-4a69-8589-61f3df54b67d-tigera-ca-bundle\") pod \"calico-typha-77db9ddbfd-kwg4h\" (UID: \"4810c7cf-cf9e-4a69-8589-61f3df54b67d\") " pod="calico-system/calico-typha-77db9ddbfd-kwg4h" Jul 10 00:29:38.555755 kubelet[2626]: I0710 00:29:38.554970 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggjz2\" (UniqueName: \"kubernetes.io/projected/4810c7cf-cf9e-4a69-8589-61f3df54b67d-kube-api-access-ggjz2\") pod \"calico-typha-77db9ddbfd-kwg4h\" (UID: \"4810c7cf-cf9e-4a69-8589-61f3df54b67d\") " pod="calico-system/calico-typha-77db9ddbfd-kwg4h" Jul 10 00:29:38.656917 kubelet[2626]: I0710 00:29:38.656805 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/db0acd58-79cc-4c44-89f0-47c9d93c6b6f-tigera-ca-bundle\") pod \"calico-node-4tg4l\" (UID: \"db0acd58-79cc-4c44-89f0-47c9d93c6b6f\") " pod="calico-system/calico-node-4tg4l" Jul 10 00:29:38.657018 kubelet[2626]: I0710 00:29:38.656861 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnwwc\" (UniqueName: \"kubernetes.io/projected/db0acd58-79cc-4c44-89f0-47c9d93c6b6f-kube-api-access-nnwwc\") pod \"calico-node-4tg4l\" (UID: \"db0acd58-79cc-4c44-89f0-47c9d93c6b6f\") " pod="calico-system/calico-node-4tg4l" Jul 10 00:29:38.657044 kubelet[2626]: I0710 00:29:38.657031 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/db0acd58-79cc-4c44-89f0-47c9d93c6b6f-node-certs\") pod \"calico-node-4tg4l\" (UID: \"db0acd58-79cc-4c44-89f0-47c9d93c6b6f\") " pod="calico-system/calico-node-4tg4l" Jul 10 00:29:38.657076 kubelet[2626]: I0710 00:29:38.657051 2626 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/db0acd58-79cc-4c44-89f0-47c9d93c6b6f-flexvol-driver-host\") pod \"calico-node-4tg4l\" (UID: \"db0acd58-79cc-4c44-89f0-47c9d93c6b6f\") " pod="calico-system/calico-node-4tg4l"
Jul 10 00:29:38.657076 kubelet[2626]: I0710 00:29:38.657068 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/db0acd58-79cc-4c44-89f0-47c9d93c6b6f-policysync\") pod \"calico-node-4tg4l\" (UID: \"db0acd58-79cc-4c44-89f0-47c9d93c6b6f\") " pod="calico-system/calico-node-4tg4l"
Jul 10 00:29:38.657121 kubelet[2626]: I0710 00:29:38.657093 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/db0acd58-79cc-4c44-89f0-47c9d93c6b6f-cni-bin-dir\") pod \"calico-node-4tg4l\" (UID: \"db0acd58-79cc-4c44-89f0-47c9d93c6b6f\") " pod="calico-system/calico-node-4tg4l"
Jul 10 00:29:38.657121 kubelet[2626]: I0710 00:29:38.657110 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/db0acd58-79cc-4c44-89f0-47c9d93c6b6f-cni-net-dir\") pod \"calico-node-4tg4l\" (UID: \"db0acd58-79cc-4c44-89f0-47c9d93c6b6f\") " pod="calico-system/calico-node-4tg4l"
Jul 10 00:29:38.657161 kubelet[2626]: I0710 00:29:38.657125 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db0acd58-79cc-4c44-89f0-47c9d93c6b6f-xtables-lock\") pod \"calico-node-4tg4l\" (UID: \"db0acd58-79cc-4c44-89f0-47c9d93c6b6f\") " pod="calico-system/calico-node-4tg4l"
Jul 10 00:29:38.657161 kubelet[2626]: I0710 00:29:38.657143 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/db0acd58-79cc-4c44-89f0-47c9d93c6b6f-cni-log-dir\") pod \"calico-node-4tg4l\" (UID: \"db0acd58-79cc-4c44-89f0-47c9d93c6b6f\") " pod="calico-system/calico-node-4tg4l"
Jul 10 00:29:38.657161 kubelet[2626]: I0710 00:29:38.657158 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/db0acd58-79cc-4c44-89f0-47c9d93c6b6f-var-lib-calico\") pod \"calico-node-4tg4l\" (UID: \"db0acd58-79cc-4c44-89f0-47c9d93c6b6f\") " pod="calico-system/calico-node-4tg4l"
Jul 10 00:29:38.657225 kubelet[2626]: I0710 00:29:38.657187 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db0acd58-79cc-4c44-89f0-47c9d93c6b6f-lib-modules\") pod \"calico-node-4tg4l\" (UID: \"db0acd58-79cc-4c44-89f0-47c9d93c6b6f\") " pod="calico-system/calico-node-4tg4l"
Jul 10 00:29:38.657225 kubelet[2626]: I0710 00:29:38.657202 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/db0acd58-79cc-4c44-89f0-47c9d93c6b6f-var-run-calico\") pod \"calico-node-4tg4l\" (UID: \"db0acd58-79cc-4c44-89f0-47c9d93c6b6f\") " pod="calico-system/calico-node-4tg4l"
Jul 10 00:29:38.761082 kubelet[2626]: E0710 00:29:38.760985 2626 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 10 00:29:38.761082 kubelet[2626]: W0710 00:29:38.761008 2626 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 10 00:29:38.761082 kubelet[2626]: E0710 00:29:38.761033 2626 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[editorial elision: the three-line FlexVolume probe failure above recurs, identical apart from timestamps, dozens of times between 00:29:38.771 and 00:29:38.993; the distinct entries interleaved with those repeats are kept below in order]
Jul 10 00:29:38.856712 containerd[1551]: time="2025-07-10T00:29:38.856396296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4tg4l,Uid:db0acd58-79cc-4c44-89f0-47c9d93c6b6f,Namespace:calico-system,Attempt:0,}"
Jul 10 00:29:38.864664 kubelet[2626]: E0710 00:29:38.864604 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqs9d" podUID="3ab91194-b6c2-41a0-9cec-3c4e398dcbbf"
Jul 10 00:29:38.914222 containerd[1551]: time="2025-07-10T00:29:38.914082398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:29:38.914222 containerd[1551]: time="2025-07-10T00:29:38.914163718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:29:38.914222 containerd[1551]: time="2025-07-10T00:29:38.914188038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:29:38.914323 containerd[1551]: time="2025-07-10T00:29:38.914285037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:29:38.950892 containerd[1551]: time="2025-07-10T00:29:38.950842583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4tg4l,Uid:db0acd58-79cc-4c44-89f0-47c9d93c6b6f,Namespace:calico-system,Attempt:0,} returns sandbox id \"5894922958e9360652a4423eda767ac2b658cb5f3602ba6f3bb2a5dbcb796ef1\""
Jul 10 00:29:38.953985 containerd[1551]: time="2025-07-10T00:29:38.953960484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 10 00:29:38.992424 kubelet[2626]: I0710 00:29:38.992285 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3ab91194-b6c2-41a0-9cec-3c4e398dcbbf-varrun\") pod \"csi-node-driver-qqs9d\" (UID: \"3ab91194-b6c2-41a0-9cec-3c4e398dcbbf\") " pod="calico-system/csi-node-driver-qqs9d"
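[editor's note: the probe-failure triplets condensed above come from the kubelet's FlexVolume dynamic prober, which execs each driver under the plugin directory with the single argument init and parses its stdout as JSON. The nodeagent~uds/uds binary does not exist yet on this node, so the exec fails, the captured output is empty, and unmarshalling an empty byte slice is exactly Go's "unexpected end of JSON input". A minimal sketch of that mechanism follows; it is not kubelet's actual source, only the driver path is copied from the log:]

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is the shape of a FlexVolume driver's JSON reply; a conforming
// driver answers init with something like
// {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

// probeInit mimics the failing call: exec the driver with "init", then
// unmarshal whatever came back.
func probeInit(driver string) error {
	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		// With the binary absent this reports an exec error and out stays
		// empty, matching the driver-call.go:149 lines above.
		fmt.Printf("FlexVolume: driver call failed: %v, output: %q\n", err, out)
	}
	var st driverStatus
	if uerr := json.Unmarshal(out, &st); uerr != nil {
		// json.Unmarshal on empty input fails with "unexpected end of JSON
		// input", matching the driver-call.go:262 lines above.
		return fmt.Errorf("failed to unmarshal output for command: init, error: %w", uerr)
	}
	return nil
}

func main() {
	fmt.Println(probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}
```

[The uds binary is what the calico-node pod's flexvol-driver container installs from the pod2daemon-flexvol image being pulled above, which is presumably why the probe failures stop recurring later in this log.]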
[editorial elision: the FlexVolume probe-failure triplet keeps recurring between 00:29:38.993 and 00:29:39.234; only the distinct entries interleaved with those repeats are kept below in order]
Jul 10 00:29:38.993099 kubelet[2626]: I0710 00:29:38.993078 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jdm6\" (UniqueName: \"kubernetes.io/projected/3ab91194-b6c2-41a0-9cec-3c4e398dcbbf-kube-api-access-7jdm6\") pod \"csi-node-driver-qqs9d\" (UID: \"3ab91194-b6c2-41a0-9cec-3c4e398dcbbf\") " pod="calico-system/csi-node-driver-qqs9d"
Jul 10 00:29:38.994624 kubelet[2626]: I0710 00:29:38.994514 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ab91194-b6c2-41a0-9cec-3c4e398dcbbf-kubelet-dir\") pod \"csi-node-driver-qqs9d\" (UID: \"3ab91194-b6c2-41a0-9cec-3c4e398dcbbf\") " pod="calico-system/csi-node-driver-qqs9d"
Jul 10 00:29:38.995502 kubelet[2626]: I0710 00:29:38.995394 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3ab91194-b6c2-41a0-9cec-3c4e398dcbbf-registration-dir\") pod \"csi-node-driver-qqs9d\" (UID: \"3ab91194-b6c2-41a0-9cec-3c4e398dcbbf\") " pod="calico-system/csi-node-driver-qqs9d"
Jul 10 00:29:38.998835 kubelet[2626]: I0710 00:29:38.998809 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3ab91194-b6c2-41a0-9cec-3c4e398dcbbf-socket-dir\") pod \"csi-node-driver-qqs9d\" (UID: \"3ab91194-b6c2-41a0-9cec-3c4e398dcbbf\") " pod="calico-system/csi-node-driver-qqs9d"
Jul 10 00:29:39.292757 kubelet[2626]: E0710 00:29:39.292716 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:39.293424 containerd[1551]: time="2025-07-10T00:29:39.293379401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77db9ddbfd-kwg4h,Uid:4810c7cf-cf9e-4a69-8589-61f3df54b67d,Namespace:calico-system,Attempt:0,}"
Jul 10 00:29:39.311405 containerd[1551]: time="2025-07-10T00:29:39.311246943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 10 00:29:39.311405 containerd[1551]: time="2025-07-10T00:29:39.311310622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 10 00:29:39.311405 containerd[1551]: time="2025-07-10T00:29:39.311325382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:29:39.312038 containerd[1551]: time="2025-07-10T00:29:39.311980098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 10 00:29:39.369417 containerd[1551]: time="2025-07-10T00:29:39.369377703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77db9ddbfd-kwg4h,Uid:4810c7cf-cf9e-4a69-8589-61f3df54b67d,Namespace:calico-system,Attempt:0,} returns sandbox id \"2858ee8e24c37c724c67231153863c67a7f75f42be27b44176e4588bac7065f6\""
Jul 10 00:29:39.370306 kubelet[2626]: E0710 00:29:39.370283 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:39.937453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697459303.mount: Deactivated successfully.
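[editor's note: the dns.go:153 entries reflect the kubelet's cap of three nameservers in the resolv.conf it hands to pods; the host resolv.conf on this node evidently lists more than three servers, so the applied line is truncated to 1.1.1.1 1.0.0.1 8.8.8.8 as the error itself reports. A minimal sketch of that capping behaviour under that assumption; the constant and function names here are illustrative, not kubelet's:]

```go
package main

import "fmt"

// maxDNSNameservers mirrors the kubelet's per-pod nameserver limit of three.
const maxDNSNameservers = 3

// capNameservers keeps the first maxDNSNameservers entries and reports
// whether anything was dropped, which is what triggers the warning above.
func capNameservers(ns []string) (applied []string, exceeded bool) {
	if len(ns) <= maxDNSNameservers {
		return ns, false
	}
	return ns[:maxDNSNameservers], true
}

func main() {
	// 9.9.9.9 stands in for a hypothetical fourth host entry; the log only
	// shows the three survivors.
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	applied, exceeded := capNameservers(host)
	fmt.Println(applied, "limits exceeded:", exceeded) // [1.1.1.1 1.0.0.1 8.8.8.8] limits exceeded: true
}
```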
Jul 10 00:29:40.009870 containerd[1551]: time="2025-07-10T00:29:40.009818145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:40.010549 containerd[1551]: time="2025-07-10T00:29:40.010514461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5636360"
Jul 10 00:29:40.011338 containerd[1551]: time="2025-07-10T00:29:40.011289617Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:40.014146 containerd[1551]: time="2025-07-10T00:29:40.013949603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:40.014650 containerd[1551]: time="2025-07-10T00:29:40.014619120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.059437843s"
Jul 10 00:29:40.014792 containerd[1551]: time="2025-07-10T00:29:40.014704759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\""
Jul 10 00:29:40.015765 containerd[1551]: time="2025-07-10T00:29:40.015741314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 10 00:29:40.018284 containerd[1551]: time="2025-07-10T00:29:40.018249181Z" level=info msg="CreateContainer within sandbox \"5894922958e9360652a4423eda767ac2b658cb5f3602ba6f3bb2a5dbcb796ef1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 10 00:29:40.038750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2114972481.mount: Deactivated successfully.
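[editor's note: the "in 1.059437843s" figure on the Pulled line is containerd's own measurement of the pod2daemon-flexvol pull, and it is consistent with the surrounding timestamps: the PullImage request was logged at 00:29:38.953960 and the final ImageCreate/Pulled events land at about 00:29:40.0146, a wall-clock gap of roughly 1.06s; the reported figure is a few milliseconds tighter because it is measured inside containerd rather than from journal timestamps.]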
Jul 10 00:29:40.040013 containerd[1551]: time="2025-07-10T00:29:40.039966589Z" level=info msg="CreateContainer within sandbox \"5894922958e9360652a4423eda767ac2b658cb5f3602ba6f3bb2a5dbcb796ef1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ad91eff859ac07b2afe5fd6aecf0b722e02a75caefd50aac6dbc671fe31845ca\""
Jul 10 00:29:40.040535 containerd[1551]: time="2025-07-10T00:29:40.040508426Z" level=info msg="StartContainer for \"ad91eff859ac07b2afe5fd6aecf0b722e02a75caefd50aac6dbc671fe31845ca\""
Jul 10 00:29:40.107592 containerd[1551]: time="2025-07-10T00:29:40.107536801Z" level=info msg="StartContainer for \"ad91eff859ac07b2afe5fd6aecf0b722e02a75caefd50aac6dbc671fe31845ca\" returns successfully"
Jul 10 00:29:40.157688 containerd[1551]: time="2025-07-10T00:29:40.157620943Z" level=info msg="shim disconnected" id=ad91eff859ac07b2afe5fd6aecf0b722e02a75caefd50aac6dbc671fe31845ca namespace=k8s.io
Jul 10 00:29:40.157688 containerd[1551]: time="2025-07-10T00:29:40.157679502Z" level=warning msg="cleaning up after shim disconnected" id=ad91eff859ac07b2afe5fd6aecf0b722e02a75caefd50aac6dbc671fe31845ca namespace=k8s.io
Jul 10 00:29:40.157688 containerd[1551]: time="2025-07-10T00:29:40.157688702Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 10 00:29:40.926294 kubelet[2626]: E0710 00:29:40.924747 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqs9d" podUID="3ab91194-b6c2-41a0-9cec-3c4e398dcbbf"
Jul 10 00:29:41.503045 containerd[1551]: time="2025-07-10T00:29:41.502998449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:41.504089 containerd[1551]: time="2025-07-10T00:29:41.503892965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=31717828"
Jul 10 00:29:41.504925 containerd[1551]: time="2025-07-10T00:29:41.504885200Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:41.507300 containerd[1551]: time="2025-07-10T00:29:41.507268589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:41.508410 containerd[1551]: time="2025-07-10T00:29:41.508377823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.49260207s"
Jul 10 00:29:41.508483 containerd[1551]: time="2025-07-10T00:29:41.508410583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Jul 10 00:29:41.510731 containerd[1551]: time="2025-07-10T00:29:41.509406819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 10 00:29:41.521926 containerd[1551]: time="2025-07-10T00:29:41.521807679Z" level=info msg="CreateContainer within sandbox \"2858ee8e24c37c724c67231153863c67a7f75f42be27b44176e4588bac7065f6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 10 00:29:41.534572 containerd[1551]: time="2025-07-10T00:29:41.534529257Z" level=info msg="CreateContainer within sandbox \"2858ee8e24c37c724c67231153863c67a7f75f42be27b44176e4588bac7065f6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0f68cecaa8bd075e2ee28d0ec352b3398b2efb51d63aad54275dcb138f32b887\""
Jul 10 00:29:41.537304 containerd[1551]: time="2025-07-10T00:29:41.536162249Z" level=info msg="StartContainer for \"0f68cecaa8bd075e2ee28d0ec352b3398b2efb51d63aad54275dcb138f32b887\""
Jul 10 00:29:41.714069 containerd[1551]: time="2025-07-10T00:29:41.714020390Z" level=info msg="StartContainer for \"0f68cecaa8bd075e2ee28d0ec352b3398b2efb51d63aad54275dcb138f32b887\" returns successfully"
Jul 10 00:29:42.031162 kubelet[2626]: E0710 00:29:42.031064 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:42.922807 kubelet[2626]: E0710 00:29:42.922756 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqs9d" podUID="3ab91194-b6c2-41a0-9cec-3c4e398dcbbf"
Jul 10 00:29:43.032913 kubelet[2626]: I0710 00:29:43.032871 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 10 00:29:43.033542 kubelet[2626]: E0710 00:29:43.033516 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 10 00:29:43.875917 containerd[1551]: time="2025-07-10T00:29:43.875436518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:43.876432 containerd[1551]: time="2025-07-10T00:29:43.875967476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320"
Jul 10 00:29:43.876983 containerd[1551]: time="2025-07-10T00:29:43.876922992Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:43.879141 containerd[1551]: time="2025-07-10T00:29:43.879109382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 10 00:29:43.880044 containerd[1551]: time="2025-07-10T00:29:43.880016298Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.37057612s"
Jul 10 00:29:43.880111 containerd[1551]: time="2025-07-10T00:29:43.880055418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\""
containerd[1551]: time="2025-07-10T00:29:43.882215769Z" level=info msg="CreateContainer within sandbox \"5894922958e9360652a4423eda767ac2b658cb5f3602ba6f3bb2a5dbcb796ef1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 00:29:43.903005 containerd[1551]: time="2025-07-10T00:29:43.902864881Z" level=info msg="CreateContainer within sandbox \"5894922958e9360652a4423eda767ac2b658cb5f3602ba6f3bb2a5dbcb796ef1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"374431d0b90234df568743fd54917146807ea3c7e86e6844a1dac3182a24ffc7\"" Jul 10 00:29:43.904066 containerd[1551]: time="2025-07-10T00:29:43.904030156Z" level=info msg="StartContainer for \"374431d0b90234df568743fd54917146807ea3c7e86e6844a1dac3182a24ffc7\"" Jul 10 00:29:43.951635 containerd[1551]: time="2025-07-10T00:29:43.951590554Z" level=info msg="StartContainer for \"374431d0b90234df568743fd54917146807ea3c7e86e6844a1dac3182a24ffc7\" returns successfully" Jul 10 00:29:44.063316 kubelet[2626]: I0710 00:29:44.062960 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77db9ddbfd-kwg4h" podStartSLOduration=3.924638013 podStartE2EDuration="6.062940458s" podCreationTimestamp="2025-07-10 00:29:38 +0000 UTC" firstStartedPulling="2025-07-10 00:29:39.370724615 +0000 UTC m=+20.527334038" lastFinishedPulling="2025-07-10 00:29:41.50902706 +0000 UTC m=+22.665636483" observedRunningTime="2025-07-10 00:29:42.040588103 +0000 UTC m=+23.197197526" watchObservedRunningTime="2025-07-10 00:29:44.062940458 +0000 UTC m=+25.219549841" Jul 10 00:29:44.569914 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-374431d0b90234df568743fd54917146807ea3c7e86e6844a1dac3182a24ffc7-rootfs.mount: Deactivated successfully. Jul 10 00:29:44.610701 containerd[1551]: time="2025-07-10T00:29:44.610635437Z" level=info msg="shim disconnected" id=374431d0b90234df568743fd54917146807ea3c7e86e6844a1dac3182a24ffc7 namespace=k8s.io Jul 10 00:29:44.610701 containerd[1551]: time="2025-07-10T00:29:44.610684477Z" level=warning msg="cleaning up after shim disconnected" id=374431d0b90234df568743fd54917146807ea3c7e86e6844a1dac3182a24ffc7 namespace=k8s.io Jul 10 00:29:44.610701 containerd[1551]: time="2025-07-10T00:29:44.610693157Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 10 00:29:44.611692 kubelet[2626]: I0710 00:29:44.611500 2626 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 00:29:44.841474 kubelet[2626]: I0710 00:29:44.841431 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a75ce03-7b22-4075-a20a-956c07a61ee9-tigera-ca-bundle\") pod \"calico-kube-controllers-7c98c58b9b-vxtxm\" (UID: \"3a75ce03-7b22-4075-a20a-956c07a61ee9\") " pod="calico-system/calico-kube-controllers-7c98c58b9b-vxtxm" Jul 10 00:29:44.841587 kubelet[2626]: I0710 00:29:44.841479 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqfdx\" (UniqueName: \"kubernetes.io/projected/0b394ef9-6acd-4661-b521-8820f934f5ed-kube-api-access-hqfdx\") pod \"calico-apiserver-5d465fcf7d-ksj25\" (UID: \"0b394ef9-6acd-4661-b521-8820f934f5ed\") " pod="calico-apiserver/calico-apiserver-5d465fcf7d-ksj25" Jul 10 00:29:44.841587 kubelet[2626]: I0710 00:29:44.841501 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf8hq\" (UniqueName: 
\"kubernetes.io/projected/3856e87c-f471-4d8a-8a66-b6670b2d88cd-kube-api-access-jf8hq\") pod \"coredns-7c65d6cfc9-wxx7d\" (UID: \"3856e87c-f471-4d8a-8a66-b6670b2d88cd\") " pod="kube-system/coredns-7c65d6cfc9-wxx7d" Jul 10 00:29:44.841587 kubelet[2626]: I0710 00:29:44.841540 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a625459-29c1-438f-ae9d-de10e2e06fa6-config-volume\") pod \"coredns-7c65d6cfc9-d987s\" (UID: \"3a625459-29c1-438f-ae9d-de10e2e06fa6\") " pod="kube-system/coredns-7c65d6cfc9-d987s" Jul 10 00:29:44.841587 kubelet[2626]: I0710 00:29:44.841558 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2vjx\" (UniqueName: \"kubernetes.io/projected/3a625459-29c1-438f-ae9d-de10e2e06fa6-kube-api-access-p2vjx\") pod \"coredns-7c65d6cfc9-d987s\" (UID: \"3a625459-29c1-438f-ae9d-de10e2e06fa6\") " pod="kube-system/coredns-7c65d6cfc9-d987s" Jul 10 00:29:44.841684 kubelet[2626]: I0710 00:29:44.841576 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/09a904e3-2a27-4aa7-afe0-ae11924a0f3d-calico-apiserver-certs\") pod \"calico-apiserver-5d465fcf7d-bpp6r\" (UID: \"09a904e3-2a27-4aa7-afe0-ae11924a0f3d\") " pod="calico-apiserver/calico-apiserver-5d465fcf7d-bpp6r" Jul 10 00:29:44.841684 kubelet[2626]: I0710 00:29:44.841642 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd-goldmane-key-pair\") pod \"goldmane-58fd7646b9-hhddt\" (UID: \"4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd\") " pod="calico-system/goldmane-58fd7646b9-hhddt" Jul 10 00:29:44.841684 kubelet[2626]: I0710 00:29:44.841657 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0b394ef9-6acd-4661-b521-8820f934f5ed-calico-apiserver-certs\") pod \"calico-apiserver-5d465fcf7d-ksj25\" (UID: \"0b394ef9-6acd-4661-b521-8820f934f5ed\") " pod="calico-apiserver/calico-apiserver-5d465fcf7d-ksj25" Jul 10 00:29:44.841684 kubelet[2626]: I0710 00:29:44.841676 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caebe2b5-f7de-49d7-9d6f-4c1c8ce15005-whisker-ca-bundle\") pod \"whisker-7969848dbb-st4jn\" (UID: \"caebe2b5-f7de-49d7-9d6f-4c1c8ce15005\") " pod="calico-system/whisker-7969848dbb-st4jn" Jul 10 00:29:44.841770 kubelet[2626]: I0710 00:29:44.841716 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3856e87c-f471-4d8a-8a66-b6670b2d88cd-config-volume\") pod \"coredns-7c65d6cfc9-wxx7d\" (UID: \"3856e87c-f471-4d8a-8a66-b6670b2d88cd\") " pod="kube-system/coredns-7c65d6cfc9-wxx7d" Jul 10 00:29:44.841770 kubelet[2626]: I0710 00:29:44.841764 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd-config\") pod \"goldmane-58fd7646b9-hhddt\" (UID: \"4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd\") " pod="calico-system/goldmane-58fd7646b9-hhddt" Jul 10 00:29:44.841817 kubelet[2626]: I0710 00:29:44.841789 
2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shnt6\" (UniqueName: \"kubernetes.io/projected/caebe2b5-f7de-49d7-9d6f-4c1c8ce15005-kube-api-access-shnt6\") pod \"whisker-7969848dbb-st4jn\" (UID: \"caebe2b5-f7de-49d7-9d6f-4c1c8ce15005\") " pod="calico-system/whisker-7969848dbb-st4jn" Jul 10 00:29:44.841817 kubelet[2626]: I0710 00:29:44.841814 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t695p\" (UniqueName: \"kubernetes.io/projected/4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd-kube-api-access-t695p\") pod \"goldmane-58fd7646b9-hhddt\" (UID: \"4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd\") " pod="calico-system/goldmane-58fd7646b9-hhddt" Jul 10 00:29:44.841860 kubelet[2626]: I0710 00:29:44.841854 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/caebe2b5-f7de-49d7-9d6f-4c1c8ce15005-whisker-backend-key-pair\") pod \"whisker-7969848dbb-st4jn\" (UID: \"caebe2b5-f7de-49d7-9d6f-4c1c8ce15005\") " pod="calico-system/whisker-7969848dbb-st4jn" Jul 10 00:29:44.841883 kubelet[2626]: I0710 00:29:44.841872 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l62n6\" (UniqueName: \"kubernetes.io/projected/3a75ce03-7b22-4075-a20a-956c07a61ee9-kube-api-access-l62n6\") pod \"calico-kube-controllers-7c98c58b9b-vxtxm\" (UID: \"3a75ce03-7b22-4075-a20a-956c07a61ee9\") " pod="calico-system/calico-kube-controllers-7c98c58b9b-vxtxm" Jul 10 00:29:44.841909 kubelet[2626]: I0710 00:29:44.841893 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4s7j\" (UniqueName: \"kubernetes.io/projected/09a904e3-2a27-4aa7-afe0-ae11924a0f3d-kube-api-access-g4s7j\") pod \"calico-apiserver-5d465fcf7d-bpp6r\" (UID: \"09a904e3-2a27-4aa7-afe0-ae11924a0f3d\") " pod="calico-apiserver/calico-apiserver-5d465fcf7d-bpp6r" Jul 10 00:29:44.841935 kubelet[2626]: I0710 00:29:44.841912 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-hhddt\" (UID: \"4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd\") " pod="calico-system/goldmane-58fd7646b9-hhddt" Jul 10 00:29:44.924670 containerd[1551]: time="2025-07-10T00:29:44.924626467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqs9d,Uid:3ab91194-b6c2-41a0-9cec-3c4e398dcbbf,Namespace:calico-system,Attempt:0,}" Jul 10 00:29:44.979842 containerd[1551]: time="2025-07-10T00:29:44.979809127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d465fcf7d-ksj25,Uid:0b394ef9-6acd-4661-b521-8820f934f5ed,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:29:44.996726 containerd[1551]: time="2025-07-10T00:29:44.996689340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7969848dbb-st4jn,Uid:caebe2b5-f7de-49d7-9d6f-4c1c8ce15005,Namespace:calico-system,Attempt:0,}" Jul 10 00:29:45.000622 kubelet[2626]: E0710 00:29:45.000589 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:45.001911 containerd[1551]: time="2025-07-10T00:29:45.000981363Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d987s,Uid:3a625459-29c1-438f-ae9d-de10e2e06fa6,Namespace:kube-system,Attempt:0,}" Jul 10 00:29:45.002600 containerd[1551]: time="2025-07-10T00:29:45.001101802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-hhddt,Uid:4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd,Namespace:calico-system,Attempt:0,}" Jul 10 00:29:45.062080 containerd[1551]: time="2025-07-10T00:29:45.061855775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 00:29:45.177998 containerd[1551]: time="2025-07-10T00:29:45.177875342Z" level=error msg="Failed to destroy network for sandbox \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.194003 containerd[1551]: time="2025-07-10T00:29:45.179508816Z" level=error msg="encountered an error cleaning up failed sandbox \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.194139 containerd[1551]: time="2025-07-10T00:29:45.194021122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d465fcf7d-ksj25,Uid:0b394ef9-6acd-4661-b521-8820f934f5ed,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.194139 containerd[1551]: time="2025-07-10T00:29:45.186946509Z" level=error msg="Failed to destroy network for sandbox \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.194232 containerd[1551]: time="2025-07-10T00:29:45.186947829Z" level=error msg="Failed to destroy network for sandbox \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.194476 kubelet[2626]: E0710 00:29:45.194438 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.194783 containerd[1551]: time="2025-07-10T00:29:45.194656680Z" level=error msg="encountered an error cleaning up failed sandbox \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jul 10 00:29:45.194783 containerd[1551]: time="2025-07-10T00:29:45.194702040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqs9d,Uid:3ab91194-b6c2-41a0-9cec-3c4e398dcbbf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.194870 kubelet[2626]: E0710 00:29:45.194836 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.195062 kubelet[2626]: E0710 00:29:45.195035 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqs9d" Jul 10 00:29:45.195086 kubelet[2626]: E0710 00:29:45.195070 2626 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqs9d" Jul 10 00:29:45.195150 kubelet[2626]: E0710 00:29:45.195122 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qqs9d_calico-system(3ab91194-b6c2-41a0-9cec-3c4e398dcbbf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qqs9d_calico-system(3ab91194-b6c2-41a0-9cec-3c4e398dcbbf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qqs9d" podUID="3ab91194-b6c2-41a0-9cec-3c4e398dcbbf" Jul 10 00:29:45.196620 containerd[1551]: time="2025-07-10T00:29:45.196535393Z" level=error msg="encountered an error cleaning up failed sandbox \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.196620 containerd[1551]: time="2025-07-10T00:29:45.196583673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7969848dbb-st4jn,Uid:caebe2b5-f7de-49d7-9d6f-4c1c8ce15005,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.196931 kubelet[2626]: E0710 00:29:45.196903 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.196994 kubelet[2626]: E0710 00:29:45.196939 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7969848dbb-st4jn" Jul 10 00:29:45.197029 kubelet[2626]: E0710 00:29:45.196987 2626 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7969848dbb-st4jn" Jul 10 00:29:45.198235 kubelet[2626]: E0710 00:29:45.197060 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7969848dbb-st4jn_calico-system(caebe2b5-f7de-49d7-9d6f-4c1c8ce15005)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7969848dbb-st4jn_calico-system(caebe2b5-f7de-49d7-9d6f-4c1c8ce15005)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7969848dbb-st4jn" podUID="caebe2b5-f7de-49d7-9d6f-4c1c8ce15005" Jul 10 00:29:45.198235 kubelet[2626]: E0710 00:29:45.197314 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d465fcf7d-ksj25" Jul 10 00:29:45.198235 kubelet[2626]: E0710 00:29:45.197351 2626 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d465fcf7d-ksj25" Jul 10 00:29:45.198400 kubelet[2626]: E0710 00:29:45.197421 2626 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d465fcf7d-ksj25_calico-apiserver(0b394ef9-6acd-4661-b521-8820f934f5ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d465fcf7d-ksj25_calico-apiserver(0b394ef9-6acd-4661-b521-8820f934f5ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d465fcf7d-ksj25" podUID="0b394ef9-6acd-4661-b521-8820f934f5ed" Jul 10 00:29:45.201385 containerd[1551]: time="2025-07-10T00:29:45.201327415Z" level=error msg="Failed to destroy network for sandbox \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.201673 containerd[1551]: time="2025-07-10T00:29:45.201637414Z" level=error msg="encountered an error cleaning up failed sandbox \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.201730 containerd[1551]: time="2025-07-10T00:29:45.201680294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d987s,Uid:3a625459-29c1-438f-ae9d-de10e2e06fa6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.201849 kubelet[2626]: E0710 00:29:45.201810 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.201903 kubelet[2626]: E0710 00:29:45.201853 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-d987s" Jul 10 00:29:45.201903 kubelet[2626]: E0710 00:29:45.201869 2626 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-d987s" Jul 10 00:29:45.201961 kubelet[2626]: E0710 00:29:45.201903 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-d987s_kube-system(3a625459-29c1-438f-ae9d-de10e2e06fa6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-d987s_kube-system(3a625459-29c1-438f-ae9d-de10e2e06fa6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-d987s" podUID="3a625459-29c1-438f-ae9d-de10e2e06fa6" Jul 10 00:29:45.203391 containerd[1551]: time="2025-07-10T00:29:45.203267648Z" level=error msg="Failed to destroy network for sandbox \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.203587 containerd[1551]: time="2025-07-10T00:29:45.203559407Z" level=error msg="encountered an error cleaning up failed sandbox \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.203651 containerd[1551]: time="2025-07-10T00:29:45.203602526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-hhddt,Uid:4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.203854 kubelet[2626]: E0710 00:29:45.203756 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.203854 kubelet[2626]: E0710 00:29:45.203797 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-hhddt" Jul 10 00:29:45.203854 kubelet[2626]: E0710 00:29:45.203812 2626 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-hhddt" Jul 10 00:29:45.203997 kubelet[2626]: E0710 00:29:45.203887 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-hhddt_calico-system(4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-hhddt_calico-system(4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-hhddt" podUID="4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd" Jul 10 00:29:45.268012 containerd[1551]: time="2025-07-10T00:29:45.267973006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c98c58b9b-vxtxm,Uid:3a75ce03-7b22-4075-a20a-956c07a61ee9,Namespace:calico-system,Attempt:0,}" Jul 10 00:29:45.270393 kubelet[2626]: E0710 00:29:45.270322 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:45.270737 containerd[1551]: time="2025-07-10T00:29:45.270706916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wxx7d,Uid:3856e87c-f471-4d8a-8a66-b6670b2d88cd,Namespace:kube-system,Attempt:0,}" Jul 10 00:29:45.279111 containerd[1551]: time="2025-07-10T00:29:45.278942085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d465fcf7d-bpp6r,Uid:09a904e3-2a27-4aa7-afe0-ae11924a0f3d,Namespace:calico-apiserver,Attempt:0,}" Jul 10 00:29:45.338139 containerd[1551]: time="2025-07-10T00:29:45.338068344Z" level=error msg="Failed to destroy network for sandbox \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.338473 containerd[1551]: time="2025-07-10T00:29:45.338344303Z" level=error msg="Failed to destroy network for sandbox \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.338473 containerd[1551]: time="2025-07-10T00:29:45.338422383Z" level=error msg="encountered an error cleaning up failed sandbox \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.338551 containerd[1551]: time="2025-07-10T00:29:45.338475623Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wxx7d,Uid:3856e87c-f471-4d8a-8a66-b6670b2d88cd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.338877 kubelet[2626]: E0710 00:29:45.338706 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.338877 kubelet[2626]: E0710 00:29:45.338778 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wxx7d" Jul 10 00:29:45.338877 kubelet[2626]: E0710 00:29:45.338796 2626 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wxx7d" Jul 10 00:29:45.338996 containerd[1551]: time="2025-07-10T00:29:45.338760182Z" level=error msg="encountered an error cleaning up failed sandbox \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.338996 containerd[1551]: time="2025-07-10T00:29:45.338798022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c98c58b9b-vxtxm,Uid:3a75ce03-7b22-4075-a20a-956c07a61ee9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.339072 kubelet[2626]: E0710 00:29:45.338840 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wxx7d_kube-system(3856e87c-f471-4d8a-8a66-b6670b2d88cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wxx7d_kube-system(3856e87c-f471-4d8a-8a66-b6670b2d88cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wxx7d" podUID="3856e87c-f471-4d8a-8a66-b6670b2d88cd" Jul 10 00:29:45.339135 kubelet[2626]: E0710 00:29:45.339106 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.339172 kubelet[2626]: E0710 00:29:45.339149 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c98c58b9b-vxtxm" Jul 10 00:29:45.340427 kubelet[2626]: E0710 00:29:45.339164 2626 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c98c58b9b-vxtxm" Jul 10 00:29:45.340520 kubelet[2626]: E0710 00:29:45.340463 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c98c58b9b-vxtxm_calico-system(3a75ce03-7b22-4075-a20a-956c07a61ee9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c98c58b9b-vxtxm_calico-system(3a75ce03-7b22-4075-a20a-956c07a61ee9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c98c58b9b-vxtxm" podUID="3a75ce03-7b22-4075-a20a-956c07a61ee9" Jul 10 00:29:45.343994 containerd[1551]: time="2025-07-10T00:29:45.343945602Z" level=error msg="Failed to destroy network for sandbox \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.344240 containerd[1551]: time="2025-07-10T00:29:45.344203642Z" level=error msg="encountered an error cleaning up failed sandbox \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.344273 containerd[1551]: time="2025-07-10T00:29:45.344244481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d465fcf7d-bpp6r,Uid:09a904e3-2a27-4aa7-afe0-ae11924a0f3d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.344529 kubelet[2626]: E0710 
00:29:45.344495 2626 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:45.344566 kubelet[2626]: E0710 00:29:45.344536 2626 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d465fcf7d-bpp6r" Jul 10 00:29:45.344597 kubelet[2626]: E0710 00:29:45.344556 2626 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d465fcf7d-bpp6r" Jul 10 00:29:45.344625 kubelet[2626]: E0710 00:29:45.344608 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d465fcf7d-bpp6r_calico-apiserver(09a904e3-2a27-4aa7-afe0-ae11924a0f3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d465fcf7d-bpp6r_calico-apiserver(09a904e3-2a27-4aa7-afe0-ae11924a0f3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d465fcf7d-bpp6r" podUID="09a904e3-2a27-4aa7-afe0-ae11924a0f3d" Jul 10 00:29:45.943850 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f-shm.mount: Deactivated successfully. 
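
Every RunPodSandbox failure above shares one root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container only writes once it is running and has mounted /var/lib/calico/, and the node image pull (ghcr.io/flatcar/calico/node:v3.30.2) has only just started at 00:29:45.061. A minimal Go sketch of that gating check follows, as a hypothetical standalone reproduction rather than Calico's actual plugin source:

// Sketch (assumption, not Calico's real code): reproduce the check behind the
// repeated "stat /var/lib/calico/nodename" errors in the log above.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename" // path reported in the stat errors

func main() {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Mirrors the logged failure mode: the file is absent until
		// calico/node starts and mounts /var/lib/calico/.
		fmt.Fprintf(os.Stderr,
			"plugin type=%q failed: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n",
			"calico", err)
		os.Exit(1)
	}
	fmt.Printf("node name: %s\n", data)
}
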
Jul 10 00:29:46.059497 kubelet[2626]: I0710 00:29:46.059445 2626 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:29:46.060403 containerd[1551]: time="2025-07-10T00:29:46.060264062Z" level=info msg="StopPodSandbox for \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\"" Jul 10 00:29:46.061043 containerd[1551]: time="2025-07-10T00:29:46.060466942Z" level=info msg="Ensure that sandbox 5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883 in task-service has been cleanup successfully" Jul 10 00:29:46.061771 kubelet[2626]: I0710 00:29:46.061743 2626 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:29:46.062472 containerd[1551]: time="2025-07-10T00:29:46.062321135Z" level=info msg="StopPodSandbox for \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\"" Jul 10 00:29:46.062586 containerd[1551]: time="2025-07-10T00:29:46.062561014Z" level=info msg="Ensure that sandbox d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f in task-service has been cleanup successfully" Jul 10 00:29:46.063415 kubelet[2626]: I0710 00:29:46.063182 2626 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:29:46.064654 containerd[1551]: time="2025-07-10T00:29:46.064627527Z" level=info msg="StopPodSandbox for \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\"" Jul 10 00:29:46.064778 containerd[1551]: time="2025-07-10T00:29:46.064761087Z" level=info msg="Ensure that sandbox f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7 in task-service has been cleanup successfully" Jul 10 00:29:46.080388 kubelet[2626]: I0710 00:29:46.079184 2626 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:29:46.080912 containerd[1551]: time="2025-07-10T00:29:46.080858470Z" level=info msg="StopPodSandbox for \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\"" Jul 10 00:29:46.081443 containerd[1551]: time="2025-07-10T00:29:46.081129229Z" level=info msg="Ensure that sandbox 0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0 in task-service has been cleanup successfully" Jul 10 00:29:46.083440 kubelet[2626]: I0710 00:29:46.083080 2626 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:29:46.085945 containerd[1551]: time="2025-07-10T00:29:46.084805376Z" level=info msg="StopPodSandbox for \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\"" Jul 10 00:29:46.086586 kubelet[2626]: I0710 00:29:46.086032 2626 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:29:46.086676 containerd[1551]: time="2025-07-10T00:29:46.086205691Z" level=info msg="Ensure that sandbox 37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332 in task-service has been cleanup successfully" Jul 10 00:29:46.087313 containerd[1551]: time="2025-07-10T00:29:46.086973089Z" level=info msg="StopPodSandbox for \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\"" Jul 10 00:29:46.088927 
kubelet[2626]: I0710 00:29:46.088119 2626 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:29:46.089008 containerd[1551]: time="2025-07-10T00:29:46.087941685Z" level=info msg="Ensure that sandbox 15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8 in task-service has been cleanup successfully" Jul 10 00:29:46.089992 containerd[1551]: time="2025-07-10T00:29:46.089944438Z" level=info msg="StopPodSandbox for \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\"" Jul 10 00:29:46.090186 containerd[1551]: time="2025-07-10T00:29:46.090129598Z" level=info msg="Ensure that sandbox cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb in task-service has been cleanup successfully" Jul 10 00:29:46.092050 kubelet[2626]: I0710 00:29:46.091950 2626 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:29:46.094297 containerd[1551]: time="2025-07-10T00:29:46.093618106Z" level=info msg="StopPodSandbox for \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\"" Jul 10 00:29:46.094297 containerd[1551]: time="2025-07-10T00:29:46.093972744Z" level=info msg="Ensure that sandbox 0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c in task-service has been cleanup successfully" Jul 10 00:29:46.117976 containerd[1551]: time="2025-07-10T00:29:46.117925820Z" level=error msg="StopPodSandbox for \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\" failed" error="failed to destroy network for sandbox \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:46.118173 kubelet[2626]: E0710 00:29:46.118135 2626 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:29:46.118245 kubelet[2626]: E0710 00:29:46.118192 2626 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883"} Jul 10 00:29:46.118285 kubelet[2626]: E0710 00:29:46.118256 2626 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b394ef9-6acd-4661-b521-8820f934f5ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:29:46.118348 kubelet[2626]: E0710 00:29:46.118280 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b394ef9-6acd-4661-b521-8820f934f5ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d465fcf7d-ksj25" podUID="0b394ef9-6acd-4661-b521-8820f934f5ed" Jul 10 00:29:46.140590 containerd[1551]: time="2025-07-10T00:29:46.140523741Z" level=error msg="StopPodSandbox for \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\" failed" error="failed to destroy network for sandbox \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:46.140809 kubelet[2626]: E0710 00:29:46.140769 2626 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:29:46.140922 kubelet[2626]: E0710 00:29:46.140822 2626 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7"} Jul 10 00:29:46.140922 kubelet[2626]: E0710 00:29:46.140861 2626 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09a904e3-2a27-4aa7-afe0-ae11924a0f3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:29:46.140922 kubelet[2626]: E0710 00:29:46.140883 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09a904e3-2a27-4aa7-afe0-ae11924a0f3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d465fcf7d-bpp6r" podUID="09a904e3-2a27-4aa7-afe0-ae11924a0f3d" Jul 10 00:29:46.152760 containerd[1551]: time="2025-07-10T00:29:46.152686259Z" level=error msg="StopPodSandbox for \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\" failed" error="failed to destroy network for sandbox \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:46.152943 kubelet[2626]: E0710 00:29:46.152904 2626 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:29:46.153020 kubelet[2626]: E0710 00:29:46.152958 2626 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0"} Jul 10 00:29:46.153020 kubelet[2626]: E0710 00:29:46.152990 2626 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:29:46.153020 kubelet[2626]: E0710 00:29:46.153012 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-hhddt" podUID="4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd" Jul 10 00:29:46.154663 containerd[1551]: time="2025-07-10T00:29:46.154487733Z" level=error msg="StopPodSandbox for \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\" failed" error="failed to destroy network for sandbox \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:46.156624 kubelet[2626]: E0710 00:29:46.156586 2626 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:29:46.156752 kubelet[2626]: E0710 00:29:46.156649 2626 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8"} Jul 10 00:29:46.156752 kubelet[2626]: E0710 00:29:46.156682 2626 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3856e87c-f471-4d8a-8a66-b6670b2d88cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jul 10 00:29:46.156752 kubelet[2626]: E0710 00:29:46.156724 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3856e87c-f471-4d8a-8a66-b6670b2d88cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wxx7d" podUID="3856e87c-f471-4d8a-8a66-b6670b2d88cd" Jul 10 00:29:46.157862 containerd[1551]: time="2025-07-10T00:29:46.157763561Z" level=error msg="StopPodSandbox for \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\" failed" error="failed to destroy network for sandbox \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:46.158015 kubelet[2626]: E0710 00:29:46.157992 2626 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:29:46.158071 kubelet[2626]: E0710 00:29:46.158024 2626 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c"} Jul 10 00:29:46.158071 kubelet[2626]: E0710 00:29:46.158051 2626 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"caebe2b5-f7de-49d7-9d6f-4c1c8ce15005\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:29:46.158136 kubelet[2626]: E0710 00:29:46.158069 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"caebe2b5-f7de-49d7-9d6f-4c1c8ce15005\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7969848dbb-st4jn" podUID="caebe2b5-f7de-49d7-9d6f-4c1c8ce15005" Jul 10 00:29:46.164229 containerd[1551]: time="2025-07-10T00:29:46.163240502Z" level=error msg="StopPodSandbox for \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\" failed" error="failed to destroy network for sandbox \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jul 10 00:29:46.164229 containerd[1551]: time="2025-07-10T00:29:46.163466701Z" level=error msg="StopPodSandbox for \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\" failed" error="failed to destroy network for sandbox \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:46.164374 kubelet[2626]: E0710 00:29:46.163524 2626 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:29:46.164374 kubelet[2626]: E0710 00:29:46.163568 2626 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332"} Jul 10 00:29:46.164374 kubelet[2626]: E0710 00:29:46.163595 2626 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a75ce03-7b22-4075-a20a-956c07a61ee9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:29:46.164374 kubelet[2626]: E0710 00:29:46.163615 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a75ce03-7b22-4075-a20a-956c07a61ee9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c98c58b9b-vxtxm" podUID="3a75ce03-7b22-4075-a20a-956c07a61ee9" Jul 10 00:29:46.164597 kubelet[2626]: E0710 00:29:46.163729 2626 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:29:46.164597 kubelet[2626]: E0710 00:29:46.163766 2626 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f"} Jul 10 00:29:46.164597 kubelet[2626]: E0710 00:29:46.163820 2626 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3ab91194-b6c2-41a0-9cec-3c4e398dcbbf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network 
for sandbox \\\"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:29:46.164597 kubelet[2626]: E0710 00:29:46.163965 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3ab91194-b6c2-41a0-9cec-3c4e398dcbbf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qqs9d" podUID="3ab91194-b6c2-41a0-9cec-3c4e398dcbbf" Jul 10 00:29:46.178769 containerd[1551]: time="2025-07-10T00:29:46.178717848Z" level=error msg="StopPodSandbox for \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\" failed" error="failed to destroy network for sandbox \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 00:29:46.178955 kubelet[2626]: E0710 00:29:46.178917 2626 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:29:46.179000 kubelet[2626]: E0710 00:29:46.178966 2626 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb"} Jul 10 00:29:46.179027 kubelet[2626]: E0710 00:29:46.178995 2626 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3a625459-29c1-438f-ae9d-de10e2e06fa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 00:29:46.179079 kubelet[2626]: E0710 00:29:46.179018 2626 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3a625459-29c1-438f-ae9d-de10e2e06fa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-d987s" podUID="3a625459-29c1-438f-ae9d-de10e2e06fa6" Jul 10 00:29:48.994054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839954263.mount: Deactivated successfully. 
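Every KillPodSandbox failure in the burst above bottoms out in the same stat: the Calico CNI plugin cannot read /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico mounted from the host. Until calico-node starts (it does, just below, at 00:29:49), every CNI operation on the node, including sandbox teardown, fails with the hint quoted in the log. A minimal sketch of that gating check, assuming only what the error text itself says; the path is taken from the log, the surrounding code is illustrative and not Calico's actual source:

```go
// Sketch (assumed, not Calico's real implementation) of the check behind the
// repeated teardown failures above: CNI operations are refused until
// /var/lib/calico/nodename exists, which happens only after the calico/node
// container is up with /var/lib/calico mounted from the host.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // path quoted in the log's error text

func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Same remediation hint the plugin embeds in its error string.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CNI will identify this host as node:", name)
}
```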
Jul 10 00:29:49.334643 containerd[1551]: time="2025-07-10T00:29:49.334589851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:49.340630 containerd[1551]: time="2025-07-10T00:29:49.340575234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 10 00:29:49.347930 containerd[1551]: time="2025-07-10T00:29:49.347887173Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:49.358128 containerd[1551]: time="2025-07-10T00:29:49.358074864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:49.360213 containerd[1551]: time="2025-07-10T00:29:49.359200460Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.297300205s" Jul 10 00:29:49.360213 containerd[1551]: time="2025-07-10T00:29:49.359260300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 10 00:29:49.373112 containerd[1551]: time="2025-07-10T00:29:49.372631822Z" level=info msg="CreateContainer within sandbox \"5894922958e9360652a4423eda767ac2b658cb5f3602ba6f3bb2a5dbcb796ef1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 00:29:49.427880 containerd[1551]: time="2025-07-10T00:29:49.427822902Z" level=info msg="CreateContainer within sandbox \"5894922958e9360652a4423eda767ac2b658cb5f3602ba6f3bb2a5dbcb796ef1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"44ec8fc1ec0616eccc25bf2d587dc074474bbe5d55e0bbee65ef234d63725444\"" Jul 10 00:29:49.429739 containerd[1551]: time="2025-07-10T00:29:49.428520740Z" level=info msg="StartContainer for \"44ec8fc1ec0616eccc25bf2d587dc074474bbe5d55e0bbee65ef234d63725444\"" Jul 10 00:29:49.511724 containerd[1551]: time="2025-07-10T00:29:49.511681901Z" level=info msg="StartContainer for \"44ec8fc1ec0616eccc25bf2d587dc074474bbe5d55e0bbee65ef234d63725444\" returns successfully" Jul 10 00:29:49.735811 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 00:29:49.735919 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 10 00:29:49.852230 containerd[1551]: time="2025-07-10T00:29:49.852044879Z" level=info msg="StopPodSandbox for \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\"" Jul 10 00:29:50.098597 containerd[1551]: 2025-07-10 00:29:49.968 [INFO][3887] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:29:50.098597 containerd[1551]: 2025-07-10 00:29:49.969 [INFO][3887] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" iface="eth0" netns="/var/run/netns/cni-576f562b-9247-1cca-0172-685cf8c87ae2" Jul 10 00:29:50.098597 containerd[1551]: 2025-07-10 00:29:49.969 [INFO][3887] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" iface="eth0" netns="/var/run/netns/cni-576f562b-9247-1cca-0172-685cf8c87ae2" Jul 10 00:29:50.098597 containerd[1551]: 2025-07-10 00:29:49.971 [INFO][3887] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" iface="eth0" netns="/var/run/netns/cni-576f562b-9247-1cca-0172-685cf8c87ae2" Jul 10 00:29:50.098597 containerd[1551]: 2025-07-10 00:29:49.971 [INFO][3887] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:29:50.098597 containerd[1551]: 2025-07-10 00:29:49.973 [INFO][3887] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:29:50.098597 containerd[1551]: 2025-07-10 00:29:50.081 [INFO][3899] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" HandleID="k8s-pod-network.0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Workload="localhost-k8s-whisker--7969848dbb--st4jn-eth0" Jul 10 00:29:50.098597 containerd[1551]: 2025-07-10 00:29:50.081 [INFO][3899] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:29:50.098597 containerd[1551]: 2025-07-10 00:29:50.081 [INFO][3899] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:29:50.098597 containerd[1551]: 2025-07-10 00:29:50.093 [WARNING][3899] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" HandleID="k8s-pod-network.0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Workload="localhost-k8s-whisker--7969848dbb--st4jn-eth0" Jul 10 00:29:50.098597 containerd[1551]: 2025-07-10 00:29:50.093 [INFO][3899] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" HandleID="k8s-pod-network.0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Workload="localhost-k8s-whisker--7969848dbb--st4jn-eth0" Jul 10 00:29:50.098597 containerd[1551]: 2025-07-10 00:29:50.094 [INFO][3899] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:29:50.098597 containerd[1551]: 2025-07-10 00:29:50.096 [INFO][3887] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:29:50.102136 containerd[1551]: time="2025-07-10T00:29:50.101981617Z" level=info msg="TearDown network for sandbox \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\" successfully" Jul 10 00:29:50.102136 containerd[1551]: time="2025-07-10T00:29:50.102015136Z" level=info msg="StopPodSandbox for \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\" returns successfully" Jul 10 00:29:50.103012 systemd[1]: run-netns-cni\x2d576f562b\x2d9247\x2d1cca\x2d0172\x2d685cf8c87ae2.mount: Deactivated successfully. 
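Note that the teardown above succeeds even though both the veth ("Workload's veth was already gone. Nothing to do.") and the IPAM allocation ("Asked to release address but it doesn't exist. Ignoring") are already gone: CNI delete operations are expected to be idempotent, so missing resources are logged and skipped rather than treated as errors. A sketch of that pattern, with an in-memory map standing in for Calico's real datastore; the helper is illustrative only:

```go
// Idempotent-release pattern visible in the teardown trace above: a DEL that
// finds nothing to free warns and still returns success. The map is a
// stand-in for Calico's datastore, not its actual API.
package main

import "fmt"

type ipamStore map[string]string // handleID -> allocated address

func (s ipamStore) release(handleID string) error {
	if _, ok := s[handleID]; !ok {
		fmt.Printf("WARNING: asked to release address for handle %s but it doesn't exist. Ignoring\n", handleID)
		return nil // deletes must stay idempotent
	}
	delete(s, handleID)
	return nil
}

func main() {
	store := ipamStore{}
	// Releasing an unknown handle warns but does not fail, matching the log.
	_ = store.release("k8s-pod-network.0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c")
}
```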
Jul 10 00:29:50.122887 kubelet[2626]: I0710 00:29:50.122817 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4tg4l" podStartSLOduration=1.71596503 podStartE2EDuration="12.12279684s" podCreationTimestamp="2025-07-10 00:29:38 +0000 UTC" firstStartedPulling="2025-07-10 00:29:38.953725646 +0000 UTC m=+20.110335069" lastFinishedPulling="2025-07-10 00:29:49.360557456 +0000 UTC m=+30.517166879" observedRunningTime="2025-07-10 00:29:50.122200562 +0000 UTC m=+31.278809985" watchObservedRunningTime="2025-07-10 00:29:50.12279684 +0000 UTC m=+31.279406263" Jul 10 00:29:50.282912 kubelet[2626]: I0710 00:29:50.282859 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/caebe2b5-f7de-49d7-9d6f-4c1c8ce15005-whisker-backend-key-pair\") pod \"caebe2b5-f7de-49d7-9d6f-4c1c8ce15005\" (UID: \"caebe2b5-f7de-49d7-9d6f-4c1c8ce15005\") " Jul 10 00:29:50.282912 kubelet[2626]: I0710 00:29:50.282922 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shnt6\" (UniqueName: \"kubernetes.io/projected/caebe2b5-f7de-49d7-9d6f-4c1c8ce15005-kube-api-access-shnt6\") pod \"caebe2b5-f7de-49d7-9d6f-4c1c8ce15005\" (UID: \"caebe2b5-f7de-49d7-9d6f-4c1c8ce15005\") " Jul 10 00:29:50.283069 kubelet[2626]: I0710 00:29:50.282955 2626 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caebe2b5-f7de-49d7-9d6f-4c1c8ce15005-whisker-ca-bundle\") pod \"caebe2b5-f7de-49d7-9d6f-4c1c8ce15005\" (UID: \"caebe2b5-f7de-49d7-9d6f-4c1c8ce15005\") " Jul 10 00:29:50.284404 kubelet[2626]: I0710 00:29:50.283995 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/caebe2b5-f7de-49d7-9d6f-4c1c8ce15005-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "caebe2b5-f7de-49d7-9d6f-4c1c8ce15005" (UID: "caebe2b5-f7de-49d7-9d6f-4c1c8ce15005"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 00:29:50.289912 systemd[1]: var-lib-kubelet-pods-caebe2b5\x2df7de\x2d49d7\x2d9d6f\x2d4c1c8ce15005-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dshnt6.mount: Deactivated successfully. Jul 10 00:29:50.293607 kubelet[2626]: I0710 00:29:50.291657 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/caebe2b5-f7de-49d7-9d6f-4c1c8ce15005-kube-api-access-shnt6" (OuterVolumeSpecName: "kube-api-access-shnt6") pod "caebe2b5-f7de-49d7-9d6f-4c1c8ce15005" (UID: "caebe2b5-f7de-49d7-9d6f-4c1c8ce15005"). InnerVolumeSpecName "kube-api-access-shnt6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 00:29:50.297994 kubelet[2626]: I0710 00:29:50.297942 2626 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/caebe2b5-f7de-49d7-9d6f-4c1c8ce15005-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "caebe2b5-f7de-49d7-9d6f-4c1c8ce15005" (UID: "caebe2b5-f7de-49d7-9d6f-4c1c8ce15005"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 00:29:50.299876 systemd[1]: var-lib-kubelet-pods-caebe2b5\x2df7de\x2d49d7\x2d9d6f\x2d4c1c8ce15005-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
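The startup-latency line above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The snippet below reproduces both numbers from the logged timestamps; the formula is inferred from how the values relate, not quoted from kubelet's source:

```go
// Re-deriving the two durations in the kubelet startup-latency line above.
// Timestamps are copied from the log; the SLO formula (E2E minus image-pull
// time) is an inference from the numbers, not kubelet's documented code.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s) // fractional seconds are accepted when parsing
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-10 00:29:38 +0000 UTC")
	firstPull := mustParse("2025-07-10 00:29:38.953725646 +0000 UTC")
	lastPull := mustParse("2025-07-10 00:29:49.360557456 +0000 UTC")
	running := mustParse("2025-07-10 00:29:50.12279684 +0000 UTC")

	e2e := running.Sub(created)          // 12.12279684s == podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.71596503s  == podStartSLOduration
	fmt.Println(e2e, slo)
}
```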
Jul 10 00:29:50.383689 kubelet[2626]: I0710 00:29:50.383565 2626 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shnt6\" (UniqueName: \"kubernetes.io/projected/caebe2b5-f7de-49d7-9d6f-4c1c8ce15005-kube-api-access-shnt6\") on node \"localhost\" DevicePath \"\"" Jul 10 00:29:50.383689 kubelet[2626]: I0710 00:29:50.383605 2626 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/caebe2b5-f7de-49d7-9d6f-4c1c8ce15005-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 00:29:50.383689 kubelet[2626]: I0710 00:29:50.383616 2626 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/caebe2b5-f7de-49d7-9d6f-4c1c8ce15005-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 00:29:51.291362 kubelet[2626]: I0710 00:29:51.291310 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2w9k\" (UniqueName: \"kubernetes.io/projected/a2228e72-bd31-4bc3-9d80-9fd896160ca4-kube-api-access-q2w9k\") pod \"whisker-64494495c8-tt2hk\" (UID: \"a2228e72-bd31-4bc3-9d80-9fd896160ca4\") " pod="calico-system/whisker-64494495c8-tt2hk" Jul 10 00:29:51.291846 kubelet[2626]: I0710 00:29:51.291399 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2228e72-bd31-4bc3-9d80-9fd896160ca4-whisker-backend-key-pair\") pod \"whisker-64494495c8-tt2hk\" (UID: \"a2228e72-bd31-4bc3-9d80-9fd896160ca4\") " pod="calico-system/whisker-64494495c8-tt2hk" Jul 10 00:29:51.291846 kubelet[2626]: I0710 00:29:51.291423 2626 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2228e72-bd31-4bc3-9d80-9fd896160ca4-whisker-ca-bundle\") pod \"whisker-64494495c8-tt2hk\" (UID: \"a2228e72-bd31-4bc3-9d80-9fd896160ca4\") " pod="calico-system/whisker-64494495c8-tt2hk" Jul 10 00:29:51.506633 containerd[1551]: time="2025-07-10T00:29:51.506585945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64494495c8-tt2hk,Uid:a2228e72-bd31-4bc3-9d80-9fd896160ca4,Namespace:calico-system,Attempt:0,}" Jul 10 00:29:51.636717 systemd-networkd[1229]: caliab71def5ccc: Link UP Jul 10 00:29:51.636907 systemd-networkd[1229]: caliab71def5ccc: Gained carrier Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.543 [INFO][4090] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.559 [INFO][4090] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--64494495c8--tt2hk-eth0 whisker-64494495c8- calico-system a2228e72-bd31-4bc3-9d80-9fd896160ca4 883 0 2025-07-10 00:29:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:64494495c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-64494495c8-tt2hk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliab71def5ccc [] [] }} ContainerID="0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" Namespace="calico-system" Pod="whisker-64494495c8-tt2hk" WorkloadEndpoint="localhost-k8s-whisker--64494495c8--tt2hk-" Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 
00:29:51.559 [INFO][4090] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" Namespace="calico-system" Pod="whisker-64494495c8-tt2hk" WorkloadEndpoint="localhost-k8s-whisker--64494495c8--tt2hk-eth0" Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.585 [INFO][4105] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" HandleID="k8s-pod-network.0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" Workload="localhost-k8s-whisker--64494495c8--tt2hk-eth0" Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.585 [INFO][4105] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" HandleID="k8s-pod-network.0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" Workload="localhost-k8s-whisker--64494495c8--tt2hk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3100), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-64494495c8-tt2hk", "timestamp":"2025-07-10 00:29:51.585272945 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.585 [INFO][4105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.586 [INFO][4105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.586 [INFO][4105] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.596 [INFO][4105] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" host="localhost" Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.605 [INFO][4105] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.610 [INFO][4105] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.612 [INFO][4105] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.614 [INFO][4105] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.614 [INFO][4105] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" host="localhost" Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.615 [INFO][4105] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40 Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.619 [INFO][4105] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" host="localhost" Jul 10 00:29:51.656196 
containerd[1551]: 2025-07-10 00:29:51.624 [INFO][4105] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" host="localhost" Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.624 [INFO][4105] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" host="localhost" Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.624 [INFO][4105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:29:51.656196 containerd[1551]: 2025-07-10 00:29:51.624 [INFO][4105] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" HandleID="k8s-pod-network.0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" Workload="localhost-k8s-whisker--64494495c8--tt2hk-eth0" Jul 10 00:29:51.656759 containerd[1551]: 2025-07-10 00:29:51.626 [INFO][4090] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" Namespace="calico-system" Pod="whisker-64494495c8-tt2hk" WorkloadEndpoint="localhost-k8s-whisker--64494495c8--tt2hk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64494495c8--tt2hk-eth0", GenerateName:"whisker-64494495c8-", Namespace:"calico-system", SelfLink:"", UID:"a2228e72-bd31-4bc3-9d80-9fd896160ca4", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64494495c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-64494495c8-tt2hk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliab71def5ccc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:29:51.656759 containerd[1551]: 2025-07-10 00:29:51.626 [INFO][4090] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" Namespace="calico-system" Pod="whisker-64494495c8-tt2hk" WorkloadEndpoint="localhost-k8s-whisker--64494495c8--tt2hk-eth0" Jul 10 00:29:51.656759 containerd[1551]: 2025-07-10 00:29:51.626 [INFO][4090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab71def5ccc ContainerID="0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" Namespace="calico-system" Pod="whisker-64494495c8-tt2hk" WorkloadEndpoint="localhost-k8s-whisker--64494495c8--tt2hk-eth0" Jul 10 00:29:51.656759 containerd[1551]: 2025-07-10 00:29:51.643 [INFO][4090] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" Namespace="calico-system" Pod="whisker-64494495c8-tt2hk" WorkloadEndpoint="localhost-k8s-whisker--64494495c8--tt2hk-eth0" Jul 10 00:29:51.656759 containerd[1551]: 2025-07-10 00:29:51.643 [INFO][4090] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" Namespace="calico-system" Pod="whisker-64494495c8-tt2hk" WorkloadEndpoint="localhost-k8s-whisker--64494495c8--tt2hk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--64494495c8--tt2hk-eth0", GenerateName:"whisker-64494495c8-", Namespace:"calico-system", SelfLink:"", UID:"a2228e72-bd31-4bc3-9d80-9fd896160ca4", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"64494495c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40", Pod:"whisker-64494495c8-tt2hk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliab71def5ccc", MAC:"f2:33:1d:ac:74:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:29:51.656759 containerd[1551]: 2025-07-10 00:29:51.651 [INFO][4090] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40" Namespace="calico-system" Pod="whisker-64494495c8-tt2hk" WorkloadEndpoint="localhost-k8s-whisker--64494495c8--tt2hk-eth0" Jul 10 00:29:51.672160 containerd[1551]: time="2025-07-10T00:29:51.671743446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:29:51.672160 containerd[1551]: time="2025-07-10T00:29:51.672132725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:29:51.672160 containerd[1551]: time="2025-07-10T00:29:51.672146405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:51.672425 containerd[1551]: time="2025-07-10T00:29:51.672231685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:51.693823 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:29:51.724427 containerd[1551]: time="2025-07-10T00:29:51.724353433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64494495c8-tt2hk,Uid:a2228e72-bd31-4bc3-9d80-9fd896160ca4,Namespace:calico-system,Attempt:0,} returns sandbox id \"0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40\"" Jul 10 00:29:51.728638 containerd[1551]: time="2025-07-10T00:29:51.728611222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 00:29:52.925029 kubelet[2626]: I0710 00:29:52.924989 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="caebe2b5-f7de-49d7-9d6f-4c1c8ce15005" path="/var/lib/kubelet/pods/caebe2b5-f7de-49d7-9d6f-4c1c8ce15005/volumes" Jul 10 00:29:52.954591 containerd[1551]: time="2025-07-10T00:29:52.954535986Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:52.955245 containerd[1551]: time="2025-07-10T00:29:52.955219064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 10 00:29:52.956737 containerd[1551]: time="2025-07-10T00:29:52.956706141Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:52.958934 containerd[1551]: time="2025-07-10T00:29:52.958845536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:52.960672 containerd[1551]: time="2025-07-10T00:29:52.960533212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.23188851s" Jul 10 00:29:52.960672 containerd[1551]: time="2025-07-10T00:29:52.960567572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 10 00:29:52.963132 containerd[1551]: time="2025-07-10T00:29:52.963019926Z" level=info msg="CreateContainer within sandbox \"0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 00:29:52.975995 containerd[1551]: time="2025-07-10T00:29:52.975922815Z" level=info msg="CreateContainer within sandbox \"0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3eeb28c750add35e960f32b01026f6199a96f142943733224c81ac922674f436\"" Jul 10 00:29:52.976667 containerd[1551]: time="2025-07-10T00:29:52.976402014Z" level=info msg="StartContainer for \"3eeb28c750add35e960f32b01026f6199a96f142943733224c81ac922674f436\"" Jul 10 00:29:53.046097 containerd[1551]: time="2025-07-10T00:29:53.046022575Z" level=info msg="StartContainer for \"3eeb28c750add35e960f32b01026f6199a96f142943733224c81ac922674f436\" returns successfully" 
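Worth noting in the CNI ADD trace above: the node holds an affinity for block 192.168.88.128/26, and the first workload address handed out is 192.168.88.129/32. A /26 leaves 6 host bits, i.e. 64 addresses (.128 through .191); the log shows assignment starting at .129, so .128 is evidently not given to workloads (treating it as the block's base address is an assumption drawn from the log, not from Calico's documentation). A small net/netip sketch of that arithmetic:

```go
// Block arithmetic behind the IPAM trace above: host-affine block
// 192.168.88.128/26, first workload address 192.168.88.129/32. Skipping
// .128 as a base address is an assumption inferred from the log.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	hostBits := 32 - block.Bits()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<hostBits) // 64

	first := block.Addr().Next() // 192.168.88.129, matching the log
	fmt.Println("first workload address:", first, "in block:", block.Contains(first))
}
```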
Jul 10 00:29:53.047646 containerd[1551]: time="2025-07-10T00:29:53.047278813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 00:29:53.327072 systemd-networkd[1229]: caliab71def5ccc: Gained IPv6LL Jul 10 00:29:54.591379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2758130695.mount: Deactivated successfully. Jul 10 00:29:54.605216 containerd[1551]: time="2025-07-10T00:29:54.605170946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:54.605803 containerd[1551]: time="2025-07-10T00:29:54.605770864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 10 00:29:54.607116 containerd[1551]: time="2025-07-10T00:29:54.607073622Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:54.609442 containerd[1551]: time="2025-07-10T00:29:54.609407897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:29:54.610375 containerd[1551]: time="2025-07-10T00:29:54.610279975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.562962883s" Jul 10 00:29:54.610375 containerd[1551]: time="2025-07-10T00:29:54.610313535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 10 00:29:54.613487 containerd[1551]: time="2025-07-10T00:29:54.613454648Z" level=info msg="CreateContainer within sandbox \"0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 00:29:54.625094 containerd[1551]: time="2025-07-10T00:29:54.624871865Z" level=info msg="CreateContainer within sandbox \"0fdd9ac7c32daba37fab5a22dbf28f5175789be2c0d728cdbb543c592a67cc40\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"640c389c849caaabf970f03d378b2e11eb2d5edbc75ab045ac2833ed71ba39c1\"" Jul 10 00:29:54.626466 containerd[1551]: time="2025-07-10T00:29:54.625421303Z" level=info msg="StartContainer for \"640c389c849caaabf970f03d378b2e11eb2d5edbc75ab045ac2833ed71ba39c1\"" Jul 10 00:29:54.681960 containerd[1551]: time="2025-07-10T00:29:54.681843026Z" level=info msg="StartContainer for \"640c389c849caaabf970f03d378b2e11eb2d5edbc75ab045ac2833ed71ba39c1\" returns successfully" Jul 10 00:29:57.922953 containerd[1551]: time="2025-07-10T00:29:57.922908059Z" level=info msg="StopPodSandbox for \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\"" Jul 10 00:29:57.974035 kubelet[2626]: I0710 00:29:57.973606 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-64494495c8-tt2hk" podStartSLOduration=4.090412342 podStartE2EDuration="6.973587692s" podCreationTimestamp="2025-07-10 00:29:51 +0000 UTC" 
firstStartedPulling="2025-07-10 00:29:51.728300263 +0000 UTC m=+32.884909686" lastFinishedPulling="2025-07-10 00:29:54.611475613 +0000 UTC m=+35.768085036" observedRunningTime="2025-07-10 00:29:55.157798132 +0000 UTC m=+36.314407555" watchObservedRunningTime="2025-07-10 00:29:57.973587692 +0000 UTC m=+39.130197075" Jul 10 00:29:58.011446 containerd[1551]: 2025-07-10 00:29:57.974 [INFO][4436] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:29:58.011446 containerd[1551]: 2025-07-10 00:29:57.975 [INFO][4436] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" iface="eth0" netns="/var/run/netns/cni-8049925b-0763-58b6-f1c9-7dfff987eaf2" Jul 10 00:29:58.011446 containerd[1551]: 2025-07-10 00:29:57.975 [INFO][4436] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" iface="eth0" netns="/var/run/netns/cni-8049925b-0763-58b6-f1c9-7dfff987eaf2" Jul 10 00:29:58.011446 containerd[1551]: 2025-07-10 00:29:57.975 [INFO][4436] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" iface="eth0" netns="/var/run/netns/cni-8049925b-0763-58b6-f1c9-7dfff987eaf2" Jul 10 00:29:58.011446 containerd[1551]: 2025-07-10 00:29:57.975 [INFO][4436] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:29:58.011446 containerd[1551]: 2025-07-10 00:29:57.975 [INFO][4436] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:29:58.011446 containerd[1551]: 2025-07-10 00:29:57.997 [INFO][4445] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" HandleID="k8s-pod-network.5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:29:58.011446 containerd[1551]: 2025-07-10 00:29:57.997 [INFO][4445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:29:58.011446 containerd[1551]: 2025-07-10 00:29:57.997 [INFO][4445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:29:58.011446 containerd[1551]: 2025-07-10 00:29:58.006 [WARNING][4445] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" HandleID="k8s-pod-network.5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:29:58.011446 containerd[1551]: 2025-07-10 00:29:58.006 [INFO][4445] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" HandleID="k8s-pod-network.5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:29:58.011446 containerd[1551]: 2025-07-10 00:29:58.007 [INFO][4445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:29:58.011446 containerd[1551]: 2025-07-10 00:29:58.009 [INFO][4436] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:29:58.011812 containerd[1551]: time="2025-07-10T00:29:58.011632988Z" level=info msg="TearDown network for sandbox \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\" successfully" Jul 10 00:29:58.011812 containerd[1551]: time="2025-07-10T00:29:58.011659268Z" level=info msg="StopPodSandbox for \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\" returns successfully" Jul 10 00:29:58.014070 systemd[1]: run-netns-cni\x2d8049925b\x2d0763\x2d58b6\x2df1c9\x2d7dfff987eaf2.mount: Deactivated successfully. Jul 10 00:29:58.016419 containerd[1551]: time="2025-07-10T00:29:58.015913661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d465fcf7d-ksj25,Uid:0b394ef9-6acd-4661-b521-8820f934f5ed,Namespace:calico-apiserver,Attempt:1,}" Jul 10 00:29:58.152668 systemd-networkd[1229]: calicbbe9d56bb3: Link UP Jul 10 00:29:58.152869 systemd-networkd[1229]: calicbbe9d56bb3: Gained carrier Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.048 [INFO][4455] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.062 [INFO][4455] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0 calico-apiserver-5d465fcf7d- calico-apiserver 0b394ef9-6acd-4661-b521-8820f934f5ed 918 0 2025-07-10 00:29:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d465fcf7d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d465fcf7d-ksj25 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicbbe9d56bb3 [] [] }} ContainerID="6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-ksj25" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-" Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.062 [INFO][4455] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-ksj25" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.087 [INFO][4469] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" HandleID="k8s-pod-network.6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.088 [INFO][4469] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" HandleID="k8s-pod-network.6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005a7cd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d465fcf7d-ksj25", "timestamp":"2025-07-10 00:29:58.087942704 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.088 [INFO][4469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.088 [INFO][4469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.088 [INFO][4469] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.097 [INFO][4469] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" host="localhost" Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.106 [INFO][4469] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.110 [INFO][4469] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.112 [INFO][4469] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.113 [INFO][4469] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.114 [INFO][4469] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" host="localhost" Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.115 [INFO][4469] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.131 [INFO][4469] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" host="localhost" Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.146 [INFO][4469] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" host="localhost" Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.146 [INFO][4469] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" host="localhost" Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.146 [INFO][4469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
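Each IPAM request in these traces is bracketed by "About to acquire host-wide IPAM lock" / "Acquired" / "Released": address assignment and release on the node are serialized, which is why the whisker and apiserver allocations proceed one at a time even when ADDs race. A minimal sketch of that serialization pattern; a plain in-process mutex stands in here, and whatever mechanism Calico actually uses is not shown in the log, only the mutual exclusion it implies:

```go
// Sketch of the lock bracketing visible around every IPAM operation above.
// The mutex is a stand-in; the log only shows that assign/release on a host
// are mutually exclusive, not how the real lock is implemented.
package main

import (
	"fmt"
	"sync"
)

var hostIPAMLock sync.Mutex

func withHostIPAMLock(op func()) {
	fmt.Println("About to acquire host-wide IPAM lock.")
	hostIPAMLock.Lock()
	fmt.Println("Acquired host-wide IPAM lock.")
	op()
	hostIPAMLock.Unlock()
	fmt.Println("Released host-wide IPAM lock.")
}

func main() {
	var wg sync.WaitGroup
	for _, pod := range []string{"whisker-64494495c8-tt2hk", "calico-apiserver-5d465fcf7d-ksj25"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			withHostIPAMLock(func() { fmt.Println("assigning address for", p) })
		}(pod)
	}
	wg.Wait()
}
```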
Jul 10 00:29:58.165517 containerd[1551]: 2025-07-10 00:29:58.146 [INFO][4469] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" HandleID="k8s-pod-network.6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:29:58.166230 containerd[1551]: 2025-07-10 00:29:58.149 [INFO][4455] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-ksj25" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0", GenerateName:"calico-apiserver-5d465fcf7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b394ef9-6acd-4661-b521-8820f934f5ed", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d465fcf7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d465fcf7d-ksj25", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicbbe9d56bb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:29:58.166230 containerd[1551]: 2025-07-10 00:29:58.149 [INFO][4455] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-ksj25" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:29:58.166230 containerd[1551]: 2025-07-10 00:29:58.149 [INFO][4455] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbbe9d56bb3 ContainerID="6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-ksj25" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:29:58.166230 containerd[1551]: 2025-07-10 00:29:58.152 [INFO][4455] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-ksj25" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:29:58.166230 containerd[1551]: 2025-07-10 00:29:58.153 [INFO][4455] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-ksj25" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0", GenerateName:"calico-apiserver-5d465fcf7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b394ef9-6acd-4661-b521-8820f934f5ed", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d465fcf7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f", Pod:"calico-apiserver-5d465fcf7d-ksj25", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicbbe9d56bb3", MAC:"72:19:be:9d:e0:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:29:58.166230 containerd[1551]: 2025-07-10 00:29:58.163 [INFO][4455] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-ksj25" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:29:58.179583 containerd[1551]: time="2025-07-10T00:29:58.179426557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:29:58.181885 containerd[1551]: time="2025-07-10T00:29:58.181837473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:29:58.181885 containerd[1551]: time="2025-07-10T00:29:58.181862513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:58.181985 containerd[1551]: time="2025-07-10T00:29:58.181955153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:58.215637 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:29:58.232936 containerd[1551]: time="2025-07-10T00:29:58.232896591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d465fcf7d-ksj25,Uid:0b394ef9-6acd-4661-b521-8820f934f5ed,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f\"" Jul 10 00:29:58.235256 containerd[1551]: time="2025-07-10T00:29:58.234444348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:29:58.923439 containerd[1551]: time="2025-07-10T00:29:58.923401717Z" level=info msg="StopPodSandbox for \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\"" Jul 10 00:29:58.998052 containerd[1551]: 2025-07-10 00:29:58.965 [INFO][4561] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:29:58.998052 containerd[1551]: 2025-07-10 00:29:58.965 [INFO][4561] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" iface="eth0" netns="/var/run/netns/cni-1c4c88c1-8400-b375-341e-0e2a0119b17a" Jul 10 00:29:58.998052 containerd[1551]: 2025-07-10 00:29:58.965 [INFO][4561] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" iface="eth0" netns="/var/run/netns/cni-1c4c88c1-8400-b375-341e-0e2a0119b17a" Jul 10 00:29:58.998052 containerd[1551]: 2025-07-10 00:29:58.966 [INFO][4561] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" iface="eth0" netns="/var/run/netns/cni-1c4c88c1-8400-b375-341e-0e2a0119b17a" Jul 10 00:29:58.998052 containerd[1551]: 2025-07-10 00:29:58.966 [INFO][4561] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:29:58.998052 containerd[1551]: 2025-07-10 00:29:58.966 [INFO][4561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:29:58.998052 containerd[1551]: 2025-07-10 00:29:58.984 [INFO][4569] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" HandleID="k8s-pod-network.cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Workload="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:29:58.998052 containerd[1551]: 2025-07-10 00:29:58.984 [INFO][4569] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:29:58.998052 containerd[1551]: 2025-07-10 00:29:58.984 [INFO][4569] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:29:58.998052 containerd[1551]: 2025-07-10 00:29:58.993 [WARNING][4569] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" HandleID="k8s-pod-network.cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Workload="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:29:58.998052 containerd[1551]: 2025-07-10 00:29:58.993 [INFO][4569] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" HandleID="k8s-pod-network.cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Workload="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:29:58.998052 containerd[1551]: 2025-07-10 00:29:58.994 [INFO][4569] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:29:58.998052 containerd[1551]: 2025-07-10 00:29:58.996 [INFO][4561] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:29:58.998568 containerd[1551]: time="2025-07-10T00:29:58.998192476Z" level=info msg="TearDown network for sandbox \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\" successfully" Jul 10 00:29:58.998568 containerd[1551]: time="2025-07-10T00:29:58.998218276Z" level=info msg="StopPodSandbox for \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\" returns successfully" Jul 10 00:29:58.998909 kubelet[2626]: E0710 00:29:58.998701 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:58.999350 containerd[1551]: time="2025-07-10T00:29:58.999205874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d987s,Uid:3a625459-29c1-438f-ae9d-de10e2e06fa6,Namespace:kube-system,Attempt:1,}" Jul 10 00:29:59.016835 systemd[1]: run-netns-cni\x2d1c4c88c1\x2d8400\x2db375\x2d341e\x2d0e2a0119b17a.mount: Deactivated successfully. 
Jul 10 00:29:59.118670 systemd-networkd[1229]: calidd9b7ba489e: Link UP Jul 10 00:29:59.119866 systemd-networkd[1229]: calidd9b7ba489e: Gained carrier Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.031 [INFO][4576] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.045 [INFO][4576] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--d987s-eth0 coredns-7c65d6cfc9- kube-system 3a625459-29c1-438f-ae9d-de10e2e06fa6 926 0 2025-07-10 00:29:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-d987s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidd9b7ba489e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d987s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d987s-" Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.045 [INFO][4576] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d987s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.069 [INFO][4590] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" HandleID="k8s-pod-network.ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" Workload="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.069 [INFO][4590] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" HandleID="k8s-pod-network.ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" Workload="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136640), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-d987s", "timestamp":"2025-07-10 00:29:59.069746848 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.069 [INFO][4590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.069 [INFO][4590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.070 [INFO][4590] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.079 [INFO][4590] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" host="localhost" Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.083 [INFO][4590] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.088 [INFO][4590] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.091 [INFO][4590] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.094 [INFO][4590] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.094 [INFO][4590] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" host="localhost" Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.096 [INFO][4590] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.102 [INFO][4590] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" host="localhost" Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.112 [INFO][4590] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" host="localhost" Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.112 [INFO][4590] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" host="localhost" Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.112 [INFO][4590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:29:59.143232 containerd[1551]: 2025-07-10 00:29:59.112 [INFO][4590] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" HandleID="k8s-pod-network.ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" Workload="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:29:59.143776 containerd[1551]: 2025-07-10 00:29:59.116 [INFO][4576] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d987s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--d987s-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3a625459-29c1-438f-ae9d-de10e2e06fa6", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-d987s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd9b7ba489e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:29:59.143776 containerd[1551]: 2025-07-10 00:29:59.116 [INFO][4576] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d987s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:29:59.143776 containerd[1551]: 2025-07-10 00:29:59.116 [INFO][4576] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd9b7ba489e ContainerID="ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d987s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:29:59.143776 containerd[1551]: 2025-07-10 00:29:59.120 [INFO][4576] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d987s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:29:59.143776 
containerd[1551]: 2025-07-10 00:29:59.120 [INFO][4576] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d987s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--d987s-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3a625459-29c1-438f-ae9d-de10e2e06fa6", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa", Pod:"coredns-7c65d6cfc9-d987s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd9b7ba489e", MAC:"12:4a:09:59:b9:19", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:29:59.143776 containerd[1551]: 2025-07-10 00:29:59.140 [INFO][4576] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa" Namespace="kube-system" Pod="coredns-7c65d6cfc9-d987s" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:29:59.177073 containerd[1551]: time="2025-07-10T00:29:59.174625129Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:29:59.177073 containerd[1551]: time="2025-07-10T00:29:59.174679969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:29:59.177073 containerd[1551]: time="2025-07-10T00:29:59.174695369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:59.177073 containerd[1551]: time="2025-07-10T00:29:59.175739967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:29:59.199810 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:29:59.215897 containerd[1551]: time="2025-07-10T00:29:59.215858267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-d987s,Uid:3a625459-29c1-438f-ae9d-de10e2e06fa6,Namespace:kube-system,Attempt:1,} returns sandbox id \"ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa\"" Jul 10 00:29:59.216468 kubelet[2626]: E0710 00:29:59.216444 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:29:59.220586 containerd[1551]: time="2025-07-10T00:29:59.220552419Z" level=info msg="CreateContainer within sandbox \"ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:29:59.244863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277563356.mount: Deactivated successfully. Jul 10 00:29:59.246233 containerd[1551]: time="2025-07-10T00:29:59.246195741Z" level=info msg="CreateContainer within sandbox \"ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25eb895ac48e4e6848b8d2dd777d1c8d5cf0dd9b629a090bac00cfb861b275d5\"" Jul 10 00:29:59.247611 containerd[1551]: time="2025-07-10T00:29:59.247413059Z" level=info msg="StartContainer for \"25eb895ac48e4e6848b8d2dd777d1c8d5cf0dd9b629a090bac00cfb861b275d5\"" Jul 10 00:29:59.291495 containerd[1551]: time="2025-07-10T00:29:59.291447832Z" level=info msg="StartContainer for \"25eb895ac48e4e6848b8d2dd777d1c8d5cf0dd9b629a090bac00cfb861b275d5\" returns successfully" Jul 10 00:29:59.851843 systemd-networkd[1229]: calicbbe9d56bb3: Gained IPv6LL Jul 10 00:29:59.923772 containerd[1551]: time="2025-07-10T00:29:59.923656196Z" level=info msg="StopPodSandbox for \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\"" Jul 10 00:29:59.923772 containerd[1551]: time="2025-07-10T00:29:59.923737636Z" level=info msg="StopPodSandbox for \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\"" Jul 10 00:29:59.928400 containerd[1551]: time="2025-07-10T00:29:59.927742230Z" level=info msg="StopPodSandbox for \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\"" Jul 10 00:29:59.977053 systemd[1]: Started sshd@7-10.0.0.65:22-10.0.0.1:57344.service - OpenSSH per-connection server daemon (10.0.0.1:57344). Jul 10 00:30:00.017206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount503746897.mount: Deactivated successfully. Jul 10 00:30:00.037517 sshd[4765]: Accepted publickey for core from 10.0.0.1 port 57344 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:00.039598 sshd[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:00.049902 systemd-logind[1524]: New session 8 of user core. Jul 10 00:30:00.054636 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 10 00:30:00.063024 containerd[1551]: 2025-07-10 00:29:59.982 [INFO][4735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:30:00.063024 containerd[1551]: 2025-07-10 00:29:59.982 [INFO][4735] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" iface="eth0" netns="/var/run/netns/cni-3c078022-b269-8ad4-aa47-46cc31a0b12c" Jul 10 00:30:00.063024 containerd[1551]: 2025-07-10 00:29:59.995 [INFO][4735] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" iface="eth0" netns="/var/run/netns/cni-3c078022-b269-8ad4-aa47-46cc31a0b12c" Jul 10 00:30:00.063024 containerd[1551]: 2025-07-10 00:29:59.995 [INFO][4735] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" iface="eth0" netns="/var/run/netns/cni-3c078022-b269-8ad4-aa47-46cc31a0b12c" Jul 10 00:30:00.063024 containerd[1551]: 2025-07-10 00:29:59.995 [INFO][4735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:30:00.063024 containerd[1551]: 2025-07-10 00:29:59.995 [INFO][4735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:30:00.063024 containerd[1551]: 2025-07-10 00:30:00.038 [INFO][4770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" HandleID="k8s-pod-network.f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:00.063024 containerd[1551]: 2025-07-10 00:30:00.038 [INFO][4770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:00.063024 containerd[1551]: 2025-07-10 00:30:00.038 [INFO][4770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:00.063024 containerd[1551]: 2025-07-10 00:30:00.049 [WARNING][4770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" HandleID="k8s-pod-network.f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:00.063024 containerd[1551]: 2025-07-10 00:30:00.049 [INFO][4770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" HandleID="k8s-pod-network.f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:00.063024 containerd[1551]: 2025-07-10 00:30:00.054 [INFO][4770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:00.063024 containerd[1551]: 2025-07-10 00:30:00.060 [INFO][4735] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:30:00.064670 containerd[1551]: time="2025-07-10T00:30:00.064448669Z" level=info msg="TearDown network for sandbox \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\" successfully" Jul 10 00:30:00.064670 containerd[1551]: time="2025-07-10T00:30:00.064487869Z" level=info msg="StopPodSandbox for \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\" returns successfully" Jul 10 00:30:00.065336 containerd[1551]: time="2025-07-10T00:30:00.065289268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d465fcf7d-bpp6r,Uid:09a904e3-2a27-4aa7-afe0-ae11924a0f3d,Namespace:calico-apiserver,Attempt:1,}" Jul 10 00:30:00.066950 systemd[1]: run-netns-cni\x2d3c078022\x2db269\x2d8ad4\x2daa47\x2d46cc31a0b12c.mount: Deactivated successfully. Jul 10 00:30:00.095303 containerd[1551]: 2025-07-10 00:30:00.005 [INFO][4750] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:30:00.095303 containerd[1551]: 2025-07-10 00:30:00.005 [INFO][4750] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" iface="eth0" netns="/var/run/netns/cni-5874498b-8112-b7e2-620b-812f62e5fd55" Jul 10 00:30:00.095303 containerd[1551]: 2025-07-10 00:30:00.006 [INFO][4750] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" iface="eth0" netns="/var/run/netns/cni-5874498b-8112-b7e2-620b-812f62e5fd55" Jul 10 00:30:00.095303 containerd[1551]: 2025-07-10 00:30:00.006 [INFO][4750] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" iface="eth0" netns="/var/run/netns/cni-5874498b-8112-b7e2-620b-812f62e5fd55" Jul 10 00:30:00.095303 containerd[1551]: 2025-07-10 00:30:00.006 [INFO][4750] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:30:00.095303 containerd[1551]: 2025-07-10 00:30:00.006 [INFO][4750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:30:00.095303 containerd[1551]: 2025-07-10 00:30:00.044 [INFO][4776] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" HandleID="k8s-pod-network.15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Workload="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:00.095303 containerd[1551]: 2025-07-10 00:30:00.044 [INFO][4776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:00.095303 containerd[1551]: 2025-07-10 00:30:00.054 [INFO][4776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:00.095303 containerd[1551]: 2025-07-10 00:30:00.075 [WARNING][4776] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" HandleID="k8s-pod-network.15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Workload="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:00.095303 containerd[1551]: 2025-07-10 00:30:00.076 [INFO][4776] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" HandleID="k8s-pod-network.15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Workload="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:00.095303 containerd[1551]: 2025-07-10 00:30:00.079 [INFO][4776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:00.095303 containerd[1551]: 2025-07-10 00:30:00.089 [INFO][4750] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:30:00.097812 systemd[1]: run-netns-cni\x2d5874498b\x2d8112\x2db7e2\x2d620b\x2d812f62e5fd55.mount: Deactivated successfully. Jul 10 00:30:00.098289 containerd[1551]: time="2025-07-10T00:30:00.097904022Z" level=info msg="TearDown network for sandbox \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\" successfully" Jul 10 00:30:00.098289 containerd[1551]: time="2025-07-10T00:30:00.097934782Z" level=info msg="StopPodSandbox for \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\" returns successfully" Jul 10 00:30:00.098389 kubelet[2626]: E0710 00:30:00.098271 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:00.100078 containerd[1551]: time="2025-07-10T00:30:00.099886579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wxx7d,Uid:3856e87c-f471-4d8a-8a66-b6670b2d88cd,Namespace:kube-system,Attempt:1,}" Jul 10 00:30:00.106009 containerd[1551]: 2025-07-10 00:30:00.021 [INFO][4737] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:30:00.106009 containerd[1551]: 2025-07-10 00:30:00.024 [INFO][4737] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" iface="eth0" netns="/var/run/netns/cni-31e3cfa3-3292-b46c-7691-4ef9294e8616" Jul 10 00:30:00.106009 containerd[1551]: 2025-07-10 00:30:00.025 [INFO][4737] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" iface="eth0" netns="/var/run/netns/cni-31e3cfa3-3292-b46c-7691-4ef9294e8616" Jul 10 00:30:00.106009 containerd[1551]: 2025-07-10 00:30:00.027 [INFO][4737] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" iface="eth0" netns="/var/run/netns/cni-31e3cfa3-3292-b46c-7691-4ef9294e8616" Jul 10 00:30:00.106009 containerd[1551]: 2025-07-10 00:30:00.028 [INFO][4737] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:30:00.106009 containerd[1551]: 2025-07-10 00:30:00.028 [INFO][4737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:30:00.106009 containerd[1551]: 2025-07-10 00:30:00.074 [INFO][4785] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" HandleID="k8s-pod-network.d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Workload="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:00.106009 containerd[1551]: 2025-07-10 00:30:00.074 [INFO][4785] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:00.106009 containerd[1551]: 2025-07-10 00:30:00.079 [INFO][4785] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:00.106009 containerd[1551]: 2025-07-10 00:30:00.092 [WARNING][4785] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" HandleID="k8s-pod-network.d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Workload="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:00.106009 containerd[1551]: 2025-07-10 00:30:00.092 [INFO][4785] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" HandleID="k8s-pod-network.d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Workload="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:00.106009 containerd[1551]: 2025-07-10 00:30:00.094 [INFO][4785] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:00.106009 containerd[1551]: 2025-07-10 00:30:00.100 [INFO][4737] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:30:00.108171 containerd[1551]: time="2025-07-10T00:30:00.108052087Z" level=info msg="TearDown network for sandbox \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\" successfully" Jul 10 00:30:00.108171 containerd[1551]: time="2025-07-10T00:30:00.108166687Z" level=info msg="StopPodSandbox for \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\" returns successfully" Jul 10 00:30:00.109261 containerd[1551]: time="2025-07-10T00:30:00.109218406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqs9d,Uid:3ab91194-b6c2-41a0-9cec-3c4e398dcbbf,Namespace:calico-system,Attempt:1,}" Jul 10 00:30:00.169747 kubelet[2626]: E0710 00:30:00.169710 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:00.220019 kubelet[2626]: I0710 00:30:00.219550 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-d987s" podStartSLOduration=35.219526169 podStartE2EDuration="35.219526169s" podCreationTimestamp="2025-07-10 00:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:30:00.188847293 +0000 UTC m=+41.345456716" watchObservedRunningTime="2025-07-10 00:30:00.219526169 +0000 UTC m=+41.376135632" Jul 10 00:30:00.288559 systemd-networkd[1229]: cali076b5175f7c: Link UP Jul 10 00:30:00.288775 systemd-networkd[1229]: cali076b5175f7c: Gained carrier Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.148 [INFO][4798] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.173 [INFO][4798] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0 calico-apiserver-5d465fcf7d- calico-apiserver 09a904e3-2a27-4aa7-afe0-ae11924a0f3d 975 0 2025-07-10 00:29:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d465fcf7d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5d465fcf7d-bpp6r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali076b5175f7c [] [] }} ContainerID="5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-bpp6r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-" Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.173 [INFO][4798] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-bpp6r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.228 [INFO][4858] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" HandleID="k8s-pod-network.5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" 
Workload="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.230 [INFO][4858] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" HandleID="k8s-pod-network.5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5d465fcf7d-bpp6r", "timestamp":"2025-07-10 00:30:00.228209357 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.230 [INFO][4858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.230 [INFO][4858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.230 [INFO][4858] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.246 [INFO][4858] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" host="localhost" Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.252 [INFO][4858] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.257 [INFO][4858] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.262 [INFO][4858] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.265 [INFO][4858] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.265 [INFO][4858] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" host="localhost" Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.266 [INFO][4858] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.270 [INFO][4858] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" host="localhost" Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.280 [INFO][4858] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" host="localhost" Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.280 [INFO][4858] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" host="localhost" Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.280 
[INFO][4858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:00.320753 containerd[1551]: 2025-07-10 00:30:00.280 [INFO][4858] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" HandleID="k8s-pod-network.5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:00.322012 containerd[1551]: 2025-07-10 00:30:00.284 [INFO][4798] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-bpp6r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0", GenerateName:"calico-apiserver-5d465fcf7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"09a904e3-2a27-4aa7-afe0-ae11924a0f3d", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d465fcf7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5d465fcf7d-bpp6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali076b5175f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:00.322012 containerd[1551]: 2025-07-10 00:30:00.284 [INFO][4798] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-bpp6r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:00.322012 containerd[1551]: 2025-07-10 00:30:00.284 [INFO][4798] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali076b5175f7c ContainerID="5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-bpp6r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:00.322012 containerd[1551]: 2025-07-10 00:30:00.293 [INFO][4798] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-bpp6r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:00.322012 containerd[1551]: 2025-07-10 00:30:00.301 [INFO][4798] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-bpp6r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0", GenerateName:"calico-apiserver-5d465fcf7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"09a904e3-2a27-4aa7-afe0-ae11924a0f3d", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d465fcf7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e", Pod:"calico-apiserver-5d465fcf7d-bpp6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali076b5175f7c", MAC:"7e:85:34:c3:41:61", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:00.322012 containerd[1551]: 2025-07-10 00:30:00.317 [INFO][4798] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e" Namespace="calico-apiserver" Pod="calico-apiserver-5d465fcf7d-bpp6r" WorkloadEndpoint="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:00.369749 containerd[1551]: time="2025-07-10T00:30:00.365584362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:30:00.369749 containerd[1551]: time="2025-07-10T00:30:00.369414277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:30:00.369749 containerd[1551]: time="2025-07-10T00:30:00.369426677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:00.370531 containerd[1551]: time="2025-07-10T00:30:00.370429755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:00.411468 systemd-networkd[1229]: cali1278d42898b: Link UP Jul 10 00:30:00.411671 systemd-networkd[1229]: cali1278d42898b: Gained carrier Jul 10 00:30:00.425139 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.149 [INFO][4815] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.170 [INFO][4815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0 coredns-7c65d6cfc9- kube-system 3856e87c-f471-4d8a-8a66-b6670b2d88cd 976 0 2025-07-10 00:29:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-wxx7d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1278d42898b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wxx7d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wxx7d-" Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.170 [INFO][4815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wxx7d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.246 [INFO][4851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" HandleID="k8s-pod-network.625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" Workload="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.247 [INFO][4851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" HandleID="k8s-pod-network.625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" Workload="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-wxx7d", "timestamp":"2025-07-10 00:30:00.244132374 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.248 [INFO][4851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.280 [INFO][4851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.280 [INFO][4851] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.348 [INFO][4851] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" host="localhost" Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.371 [INFO][4851] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.379 [INFO][4851] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.382 [INFO][4851] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.385 [INFO][4851] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.385 [INFO][4851] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" host="localhost" Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.388 [INFO][4851] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47 Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.395 [INFO][4851] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" host="localhost" Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.403 [INFO][4851] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" host="localhost" Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.403 [INFO][4851] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" host="localhost" Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.403 [INFO][4851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:30:00.435241 containerd[1551]: 2025-07-10 00:30:00.403 [INFO][4851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" HandleID="k8s-pod-network.625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" Workload="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:00.435969 containerd[1551]: 2025-07-10 00:30:00.409 [INFO][4815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wxx7d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3856e87c-f471-4d8a-8a66-b6670b2d88cd", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-wxx7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1278d42898b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:00.435969 containerd[1551]: 2025-07-10 00:30:00.409 [INFO][4815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wxx7d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:00.435969 containerd[1551]: 2025-07-10 00:30:00.409 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1278d42898b ContainerID="625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wxx7d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:00.435969 containerd[1551]: 2025-07-10 00:30:00.412 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wxx7d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:00.435969 
containerd[1551]: 2025-07-10 00:30:00.413 [INFO][4815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wxx7d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3856e87c-f471-4d8a-8a66-b6670b2d88cd", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47", Pod:"coredns-7c65d6cfc9-wxx7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1278d42898b", MAC:"da:d2:a1:c0:af:f2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:00.435969 containerd[1551]: 2025-07-10 00:30:00.432 [INFO][4815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wxx7d" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:00.459102 containerd[1551]: time="2025-07-10T00:30:00.456454393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:30:00.459102 containerd[1551]: time="2025-07-10T00:30:00.457633672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:30:00.459102 containerd[1551]: time="2025-07-10T00:30:00.457655392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:00.459102 containerd[1551]: time="2025-07-10T00:30:00.457774552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:00.469821 containerd[1551]: time="2025-07-10T00:30:00.469762375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d465fcf7d-bpp6r,Uid:09a904e3-2a27-4aa7-afe0-ae11924a0f3d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e\"" Jul 10 00:30:00.476934 sshd[4765]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:00.486202 systemd-logind[1524]: Session 8 logged out. Waiting for processes to exit. Jul 10 00:30:00.486611 systemd[1]: sshd@7-10.0.0.65:22-10.0.0.1:57344.service: Deactivated successfully. Jul 10 00:30:00.489737 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 00:30:00.490820 systemd-logind[1524]: Removed session 8. Jul 10 00:30:00.491876 systemd-networkd[1229]: calidd9b7ba489e: Gained IPv6LL Jul 10 00:30:00.500987 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:30:00.508849 systemd-networkd[1229]: cali8ca8ff2e3a8: Link UP Jul 10 00:30:00.509020 systemd-networkd[1229]: cali8ca8ff2e3a8: Gained carrier Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.181 [INFO][4832] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.221 [INFO][4832] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qqs9d-eth0 csi-node-driver- calico-system 3ab91194-b6c2-41a0-9cec-3c4e398dcbbf 977 0 2025-07-10 00:29:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qqs9d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8ca8ff2e3a8 [] [] }} ContainerID="a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" Namespace="calico-system" Pod="csi-node-driver-qqs9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqs9d-" Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.221 [INFO][4832] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" Namespace="calico-system" Pod="csi-node-driver-qqs9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.307 [INFO][4873] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" HandleID="k8s-pod-network.a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" Workload="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.307 [INFO][4873] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" HandleID="k8s-pod-network.a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" Workload="localhost-k8s-csi--node--driver--qqs9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005af830), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qqs9d", 
"timestamp":"2025-07-10 00:30:00.307172085 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.307 [INFO][4873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.404 [INFO][4873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.405 [INFO][4873] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.450 [INFO][4873] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" host="localhost" Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.464 [INFO][4873] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.478 [INFO][4873] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.482 [INFO][4873] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.485 [INFO][4873] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.485 [INFO][4873] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" host="localhost" Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.491 [INFO][4873] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64 Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.496 [INFO][4873] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" host="localhost" Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.502 [INFO][4873] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" host="localhost" Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.503 [INFO][4873] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" host="localhost" Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.503 [INFO][4873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:30:00.524454 containerd[1551]: 2025-07-10 00:30:00.503 [INFO][4873] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" HandleID="k8s-pod-network.a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" Workload="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:00.525148 containerd[1551]: 2025-07-10 00:30:00.505 [INFO][4832] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" Namespace="calico-system" Pod="csi-node-driver-qqs9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqs9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qqs9d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3ab91194-b6c2-41a0-9cec-3c4e398dcbbf", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qqs9d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ca8ff2e3a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:00.525148 containerd[1551]: 2025-07-10 00:30:00.505 [INFO][4832] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" Namespace="calico-system" Pod="csi-node-driver-qqs9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:00.525148 containerd[1551]: 2025-07-10 00:30:00.505 [INFO][4832] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ca8ff2e3a8 ContainerID="a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" Namespace="calico-system" Pod="csi-node-driver-qqs9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:00.525148 containerd[1551]: 2025-07-10 00:30:00.507 [INFO][4832] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" Namespace="calico-system" Pod="csi-node-driver-qqs9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:00.525148 containerd[1551]: 2025-07-10 00:30:00.508 [INFO][4832] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" Namespace="calico-system" Pod="csi-node-driver-qqs9d" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--qqs9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qqs9d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3ab91194-b6c2-41a0-9cec-3c4e398dcbbf", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64", Pod:"csi-node-driver-qqs9d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ca8ff2e3a8", MAC:"ca:d9:e3:4e:e5:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:00.525148 containerd[1551]: 2025-07-10 00:30:00.522 [INFO][4832] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64" Namespace="calico-system" Pod="csi-node-driver-qqs9d" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:00.546823 containerd[1551]: time="2025-07-10T00:30:00.546486946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:30:00.546823 containerd[1551]: time="2025-07-10T00:30:00.546544266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:30:00.546823 containerd[1551]: time="2025-07-10T00:30:00.546559866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:00.546823 containerd[1551]: time="2025-07-10T00:30:00.546637586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:00.557606 containerd[1551]: time="2025-07-10T00:30:00.557493650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wxx7d,Uid:3856e87c-f471-4d8a-8a66-b6670b2d88cd,Namespace:kube-system,Attempt:1,} returns sandbox id \"625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47\"" Jul 10 00:30:00.558388 kubelet[2626]: E0710 00:30:00.558314 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:00.562309 containerd[1551]: time="2025-07-10T00:30:00.562263043Z" level=info msg="CreateContainer within sandbox \"625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 00:30:00.573662 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:30:00.580507 containerd[1551]: time="2025-07-10T00:30:00.580459898Z" level=info msg="CreateContainer within sandbox \"625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8121ce5a13270db811286b1257d50b90ff40a94da9ebd95699a2c14319137b27\"" Jul 10 00:30:00.581262 containerd[1551]: time="2025-07-10T00:30:00.581029737Z" level=info msg="StartContainer for \"8121ce5a13270db811286b1257d50b90ff40a94da9ebd95699a2c14319137b27\"" Jul 10 00:30:00.590616 containerd[1551]: time="2025-07-10T00:30:00.590579963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqs9d,Uid:3ab91194-b6c2-41a0-9cec-3c4e398dcbbf,Namespace:calico-system,Attempt:1,} returns sandbox id \"a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64\"" Jul 10 00:30:00.646487 containerd[1551]: time="2025-07-10T00:30:00.643621608Z" level=info msg="StartContainer for \"8121ce5a13270db811286b1257d50b90ff40a94da9ebd95699a2c14319137b27\" returns successfully" Jul 10 00:30:00.924119 containerd[1551]: time="2025-07-10T00:30:00.923721851Z" level=info msg="StopPodSandbox for \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\"" Jul 10 00:30:01.013785 containerd[1551]: 2025-07-10 00:30:00.975 [INFO][5104] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:30:01.013785 containerd[1551]: 2025-07-10 00:30:00.975 [INFO][5104] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" iface="eth0" netns="/var/run/netns/cni-f112dd94-0de1-786e-f56d-c05839c552b4" Jul 10 00:30:01.013785 containerd[1551]: 2025-07-10 00:30:00.975 [INFO][5104] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" iface="eth0" netns="/var/run/netns/cni-f112dd94-0de1-786e-f56d-c05839c552b4" Jul 10 00:30:01.013785 containerd[1551]: 2025-07-10 00:30:00.976 [INFO][5104] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" iface="eth0" netns="/var/run/netns/cni-f112dd94-0de1-786e-f56d-c05839c552b4" Jul 10 00:30:01.013785 containerd[1551]: 2025-07-10 00:30:00.976 [INFO][5104] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:30:01.013785 containerd[1551]: 2025-07-10 00:30:00.976 [INFO][5104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:30:01.013785 containerd[1551]: 2025-07-10 00:30:00.995 [INFO][5113] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" HandleID="k8s-pod-network.37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Workload="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:01.013785 containerd[1551]: 2025-07-10 00:30:00.995 [INFO][5113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:01.013785 containerd[1551]: 2025-07-10 00:30:00.995 [INFO][5113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:01.013785 containerd[1551]: 2025-07-10 00:30:01.007 [WARNING][5113] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" HandleID="k8s-pod-network.37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Workload="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:01.013785 containerd[1551]: 2025-07-10 00:30:01.007 [INFO][5113] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" HandleID="k8s-pod-network.37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Workload="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:01.013785 containerd[1551]: 2025-07-10 00:30:01.009 [INFO][5113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:01.013785 containerd[1551]: 2025-07-10 00:30:01.010 [INFO][5104] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:30:01.014244 containerd[1551]: time="2025-07-10T00:30:01.014164564Z" level=info msg="TearDown network for sandbox \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\" successfully" Jul 10 00:30:01.014244 containerd[1551]: time="2025-07-10T00:30:01.014197644Z" level=info msg="StopPodSandbox for \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\" returns successfully" Jul 10 00:30:01.018765 containerd[1551]: time="2025-07-10T00:30:01.018713678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c98c58b9b-vxtxm,Uid:3a75ce03-7b22-4075-a20a-956c07a61ee9,Namespace:calico-system,Attempt:1,}" Jul 10 00:30:01.020814 systemd[1]: run-netns-cni\x2df112dd94\x2d0de1\x2d786e\x2df56d\x2dc05839c552b4.mount: Deactivated successfully. Jul 10 00:30:01.020965 systemd[1]: run-netns-cni\x2d31e3cfa3\x2d3292\x2db46c\x2d7691\x2d4ef9294e8616.mount: Deactivated successfully. 
Jul 10 00:30:01.126544 containerd[1551]: time="2025-07-10T00:30:01.126491695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:01.129596 containerd[1551]: time="2025-07-10T00:30:01.129550410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 10 00:30:01.133088 containerd[1551]: time="2025-07-10T00:30:01.132929686Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:01.136065 containerd[1551]: time="2025-07-10T00:30:01.136028402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:01.136981 containerd[1551]: time="2025-07-10T00:30:01.136939961Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.902465213s" Jul 10 00:30:01.136981 containerd[1551]: time="2025-07-10T00:30:01.136976401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 00:30:01.140180 containerd[1551]: time="2025-07-10T00:30:01.139943037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 00:30:01.146943 containerd[1551]: time="2025-07-10T00:30:01.146905187Z" level=info msg="CreateContainer within sandbox \"6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:30:01.159590 containerd[1551]: time="2025-07-10T00:30:01.159546691Z" level=info msg="CreateContainer within sandbox \"6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"baaca7e34f7acc2ff711ff5c3c1fda5cd217ca403855c74c8a72c428ad7a8058\"" Jul 10 00:30:01.160478 containerd[1551]: time="2025-07-10T00:30:01.160221370Z" level=info msg="StartContainer for \"baaca7e34f7acc2ff711ff5c3c1fda5cd217ca403855c74c8a72c428ad7a8058\"" Jul 10 00:30:01.177873 kubelet[2626]: E0710 00:30:01.177779 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:01.183788 kubelet[2626]: E0710 00:30:01.183625 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:01.199168 kubelet[2626]: I0710 00:30:01.198725 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wxx7d" podStartSLOduration=36.198707599 podStartE2EDuration="36.198707599s" podCreationTimestamp="2025-07-10 00:29:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 00:30:01.198271879 +0000 UTC 
m=+42.354881302" watchObservedRunningTime="2025-07-10 00:30:01.198707599 +0000 UTC m=+42.355317022" Jul 10 00:30:01.258632 systemd-networkd[1229]: cali38e15ba6e2d: Link UP Jul 10 00:30:01.259762 systemd-networkd[1229]: cali38e15ba6e2d: Gained carrier Jul 10 00:30:01.274862 containerd[1551]: time="2025-07-10T00:30:01.274815737Z" level=info msg="StartContainer for \"baaca7e34f7acc2ff711ff5c3c1fda5cd217ca403855c74c8a72c428ad7a8058\" returns successfully" Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.150 [INFO][5126] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.166 [INFO][5126] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0 calico-kube-controllers-7c98c58b9b- calico-system 3a75ce03-7b22-4075-a20a-956c07a61ee9 1005 0 2025-07-10 00:29:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c98c58b9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7c98c58b9b-vxtxm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali38e15ba6e2d [] [] }} ContainerID="7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" Namespace="calico-system" Pod="calico-kube-controllers-7c98c58b9b-vxtxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-" Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.166 [INFO][5126] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" Namespace="calico-system" Pod="calico-kube-controllers-7c98c58b9b-vxtxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.206 [INFO][5147] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" HandleID="k8s-pod-network.7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" Workload="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.207 [INFO][5147] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" HandleID="k8s-pod-network.7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" Workload="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7c98c58b9b-vxtxm", "timestamp":"2025-07-10 00:30:01.206899828 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.207 [INFO][5147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.207 [INFO][5147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.207 [INFO][5147] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.218 [INFO][5147] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" host="localhost" Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.224 [INFO][5147] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.235 [INFO][5147] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.238 [INFO][5147] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.241 [INFO][5147] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.241 [INFO][5147] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" host="localhost" Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.243 [INFO][5147] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.247 [INFO][5147] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" host="localhost" Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.253 [INFO][5147] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" host="localhost" Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.253 [INFO][5147] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" host="localhost" Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.253 [INFO][5147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:30:01.283028 containerd[1551]: 2025-07-10 00:30:01.253 [INFO][5147] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" HandleID="k8s-pod-network.7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" Workload="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:01.283725 containerd[1551]: 2025-07-10 00:30:01.255 [INFO][5126] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" Namespace="calico-system" Pod="calico-kube-controllers-7c98c58b9b-vxtxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0", GenerateName:"calico-kube-controllers-7c98c58b9b-", Namespace:"calico-system", SelfLink:"", UID:"3a75ce03-7b22-4075-a20a-956c07a61ee9", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c98c58b9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7c98c58b9b-vxtxm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali38e15ba6e2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:01.283725 containerd[1551]: 2025-07-10 00:30:01.255 [INFO][5126] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" Namespace="calico-system" Pod="calico-kube-controllers-7c98c58b9b-vxtxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:01.283725 containerd[1551]: 2025-07-10 00:30:01.255 [INFO][5126] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38e15ba6e2d ContainerID="7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" Namespace="calico-system" Pod="calico-kube-controllers-7c98c58b9b-vxtxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:01.283725 containerd[1551]: 2025-07-10 00:30:01.260 [INFO][5126] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" Namespace="calico-system" Pod="calico-kube-controllers-7c98c58b9b-vxtxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:01.283725 containerd[1551]: 2025-07-10 00:30:01.261 [INFO][5126] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" Namespace="calico-system" Pod="calico-kube-controllers-7c98c58b9b-vxtxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0", GenerateName:"calico-kube-controllers-7c98c58b9b-", Namespace:"calico-system", SelfLink:"", UID:"3a75ce03-7b22-4075-a20a-956c07a61ee9", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c98c58b9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb", Pod:"calico-kube-controllers-7c98c58b9b-vxtxm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali38e15ba6e2d", MAC:"36:7d:88:de:8a:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:01.283725 containerd[1551]: 2025-07-10 00:30:01.277 [INFO][5126] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb" Namespace="calico-system" Pod="calico-kube-controllers-7c98c58b9b-vxtxm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:01.299291 containerd[1551]: time="2025-07-10T00:30:01.299201065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:30:01.299291 containerd[1551]: time="2025-07-10T00:30:01.299253225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:30:01.299291 containerd[1551]: time="2025-07-10T00:30:01.299272145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:01.299681 containerd[1551]: time="2025-07-10T00:30:01.299583064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:01.328858 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:30:01.358742 containerd[1551]: time="2025-07-10T00:30:01.358700026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c98c58b9b-vxtxm,Uid:3a75ce03-7b22-4075-a20a-956c07a61ee9,Namespace:calico-system,Attempt:1,} returns sandbox id \"7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb\"" Jul 10 00:30:01.760456 containerd[1551]: time="2025-07-10T00:30:01.760409852Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:01.761367 containerd[1551]: time="2025-07-10T00:30:01.761329051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 10 00:30:01.763715 containerd[1551]: time="2025-07-10T00:30:01.763680768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 623.676411ms" Jul 10 00:30:01.763760 containerd[1551]: time="2025-07-10T00:30:01.763727328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 10 00:30:01.765330 containerd[1551]: time="2025-07-10T00:30:01.764861726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 00:30:01.766439 containerd[1551]: time="2025-07-10T00:30:01.766395564Z" level=info msg="CreateContainer within sandbox \"5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 00:30:01.783403 containerd[1551]: time="2025-07-10T00:30:01.783341821Z" level=info msg="CreateContainer within sandbox \"5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"62359fb250be60e7bc0057191dfaa498bca7a9934b95ac8d068064e56b3fe94e\"" Jul 10 00:30:01.784580 containerd[1551]: time="2025-07-10T00:30:01.783817741Z" level=info msg="StartContainer for \"62359fb250be60e7bc0057191dfaa498bca7a9934b95ac8d068064e56b3fe94e\"" Jul 10 00:30:01.857287 containerd[1551]: time="2025-07-10T00:30:01.855979885Z" level=info msg="StartContainer for \"62359fb250be60e7bc0057191dfaa498bca7a9934b95ac8d068064e56b3fe94e\" returns successfully" Jul 10 00:30:01.923037 containerd[1551]: time="2025-07-10T00:30:01.923001236Z" level=info msg="StopPodSandbox for \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\"" Jul 10 00:30:02.038063 containerd[1551]: 2025-07-10 00:30:01.982 [INFO][5300] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:30:02.038063 containerd[1551]: 2025-07-10 00:30:01.982 [INFO][5300] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" iface="eth0" netns="/var/run/netns/cni-d6126540-f1bd-9002-f639-1e20b724eb45" Jul 10 00:30:02.038063 containerd[1551]: 2025-07-10 00:30:01.982 [INFO][5300] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" iface="eth0" netns="/var/run/netns/cni-d6126540-f1bd-9002-f639-1e20b724eb45" Jul 10 00:30:02.038063 containerd[1551]: 2025-07-10 00:30:01.984 [INFO][5300] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" iface="eth0" netns="/var/run/netns/cni-d6126540-f1bd-9002-f639-1e20b724eb45" Jul 10 00:30:02.038063 containerd[1551]: 2025-07-10 00:30:01.984 [INFO][5300] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:30:02.038063 containerd[1551]: 2025-07-10 00:30:01.984 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:30:02.038063 containerd[1551]: 2025-07-10 00:30:02.010 [INFO][5316] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" HandleID="k8s-pod-network.0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Workload="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:02.038063 containerd[1551]: 2025-07-10 00:30:02.010 [INFO][5316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:02.038063 containerd[1551]: 2025-07-10 00:30:02.010 [INFO][5316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:02.038063 containerd[1551]: 2025-07-10 00:30:02.023 [WARNING][5316] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" HandleID="k8s-pod-network.0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Workload="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:02.038063 containerd[1551]: 2025-07-10 00:30:02.023 [INFO][5316] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" HandleID="k8s-pod-network.0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Workload="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:02.038063 containerd[1551]: 2025-07-10 00:30:02.026 [INFO][5316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:02.038063 containerd[1551]: 2025-07-10 00:30:02.031 [INFO][5300] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:30:02.038063 containerd[1551]: time="2025-07-10T00:30:02.035659329Z" level=info msg="TearDown network for sandbox \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\" successfully" Jul 10 00:30:02.038063 containerd[1551]: time="2025-07-10T00:30:02.035685089Z" level=info msg="StopPodSandbox for \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\" returns successfully" Jul 10 00:30:02.039460 containerd[1551]: time="2025-07-10T00:30:02.039213445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-hhddt,Uid:4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd,Namespace:calico-system,Attempt:1,}" Jul 10 00:30:02.044014 systemd[1]: run-netns-cni\x2dd6126540\x2df1bd\x2d9002\x2df639\x2d1e20b724eb45.mount: Deactivated successfully. Jul 10 00:30:02.170069 systemd-networkd[1229]: cali171c365e1bd: Link UP Jul 10 00:30:02.171088 systemd-networkd[1229]: cali171c365e1bd: Gained carrier Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.081 [INFO][5326] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.095 [INFO][5326] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--hhddt-eth0 goldmane-58fd7646b9- calico-system 4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd 1034 0 2025-07-10 00:29:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-hhddt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali171c365e1bd [] [] }} ContainerID="0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" Namespace="calico-system" Pod="goldmane-58fd7646b9-hhddt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--hhddt-" Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.095 [INFO][5326] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" Namespace="calico-system" Pod="goldmane-58fd7646b9-hhddt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.121 [INFO][5340] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" HandleID="k8s-pod-network.0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" Workload="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.121 [INFO][5340] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" HandleID="k8s-pod-network.0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" Workload="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd000), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-hhddt", "timestamp":"2025-07-10 00:30:02.121274342 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.121 [INFO][5340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.121 [INFO][5340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.121 [INFO][5340] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.132 [INFO][5340] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" host="localhost" Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.136 [INFO][5340] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.146 [INFO][5340] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.148 [INFO][5340] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.153 [INFO][5340] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.153 [INFO][5340] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" host="localhost" Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.155 [INFO][5340] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8 Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.159 [INFO][5340] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" host="localhost" Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.164 [INFO][5340] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" host="localhost" Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.164 [INFO][5340] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" host="localhost" Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.164 [INFO][5340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 00:30:02.190272 containerd[1551]: 2025-07-10 00:30:02.164 [INFO][5340] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" HandleID="k8s-pod-network.0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" Workload="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:02.192278 containerd[1551]: 2025-07-10 00:30:02.167 [INFO][5326] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" Namespace="calico-system" Pod="goldmane-58fd7646b9-hhddt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--hhddt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-hhddt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali171c365e1bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:02.192278 containerd[1551]: 2025-07-10 00:30:02.167 [INFO][5326] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" Namespace="calico-system" Pod="goldmane-58fd7646b9-hhddt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:02.192278 containerd[1551]: 2025-07-10 00:30:02.167 [INFO][5326] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali171c365e1bd ContainerID="0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" Namespace="calico-system" Pod="goldmane-58fd7646b9-hhddt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:02.192278 containerd[1551]: 2025-07-10 00:30:02.171 [INFO][5326] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" Namespace="calico-system" Pod="goldmane-58fd7646b9-hhddt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:02.192278 containerd[1551]: 2025-07-10 00:30:02.172 [INFO][5326] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" Namespace="calico-system" Pod="goldmane-58fd7646b9-hhddt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--hhddt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8", Pod:"goldmane-58fd7646b9-hhddt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali171c365e1bd", MAC:"c2:f2:d6:ff:0a:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:02.192278 containerd[1551]: 2025-07-10 00:30:02.186 [INFO][5326] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8" Namespace="calico-system" Pod="goldmane-58fd7646b9-hhddt" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:02.203868 kubelet[2626]: E0710 00:30:02.201158 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:02.203868 kubelet[2626]: E0710 00:30:02.201431 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:02.207346 kubelet[2626]: I0710 00:30:02.205524 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d465fcf7d-ksj25" podStartSLOduration=25.302048465 podStartE2EDuration="28.205507477s" podCreationTimestamp="2025-07-10 00:29:34 +0000 UTC" firstStartedPulling="2025-07-10 00:29:58.234222628 +0000 UTC m=+39.390832051" lastFinishedPulling="2025-07-10 00:30:01.13768164 +0000 UTC m=+42.294291063" observedRunningTime="2025-07-10 00:30:02.204497319 +0000 UTC m=+43.361106742" watchObservedRunningTime="2025-07-10 00:30:02.205507477 +0000 UTC m=+43.362116900" Jul 10 00:30:02.221699 systemd-networkd[1229]: cali1278d42898b: Gained IPv6LL Jul 10 00:30:02.224231 systemd-networkd[1229]: cali8ca8ff2e3a8: Gained IPv6LL Jul 10 00:30:02.227675 kubelet[2626]: I0710 00:30:02.226419 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d465fcf7d-bpp6r" podStartSLOduration=26.938964208 podStartE2EDuration="28.226400611s" podCreationTimestamp="2025-07-10 00:29:34 +0000 UTC" firstStartedPulling="2025-07-10 00:30:00.477008084 +0000 UTC m=+41.633617467" lastFinishedPulling="2025-07-10 00:30:01.764444447 +0000 UTC m=+42.921053870" 
observedRunningTime="2025-07-10 00:30:02.225926172 +0000 UTC m=+43.382535595" watchObservedRunningTime="2025-07-10 00:30:02.226400611 +0000 UTC m=+43.383010034" Jul 10 00:30:02.239989 containerd[1551]: time="2025-07-10T00:30:02.239582515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 00:30:02.239989 containerd[1551]: time="2025-07-10T00:30:02.239651195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 00:30:02.239989 containerd[1551]: time="2025-07-10T00:30:02.239665835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:02.239989 containerd[1551]: time="2025-07-10T00:30:02.239764835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 00:30:02.270693 systemd-resolved[1436]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 00:30:02.283522 systemd-networkd[1229]: cali076b5175f7c: Gained IPv6LL Jul 10 00:30:02.306419 containerd[1551]: time="2025-07-10T00:30:02.306294312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-hhddt,Uid:4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd,Namespace:calico-system,Attempt:1,} returns sandbox id \"0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8\"" Jul 10 00:30:02.923773 systemd-networkd[1229]: cali38e15ba6e2d: Gained IPv6LL Jul 10 00:30:03.010012 kubelet[2626]: I0710 00:30:03.009978 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:30:03.010916 kubelet[2626]: E0710 00:30:03.010541 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:03.203596 kubelet[2626]: I0710 00:30:03.203492 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:30:03.204504 kubelet[2626]: E0710 00:30:03.204055 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:03.204504 kubelet[2626]: E0710 00:30:03.204154 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:03.205146 kubelet[2626]: I0710 00:30:03.204854 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:30:03.692598 systemd-networkd[1229]: cali171c365e1bd: Gained IPv6LL Jul 10 00:30:03.761290 kernel: bpftool[5465]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 10 00:30:03.850862 containerd[1551]: time="2025-07-10T00:30:03.850817613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:03.851767 containerd[1551]: time="2025-07-10T00:30:03.851738412Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 10 00:30:03.852792 containerd[1551]: time="2025-07-10T00:30:03.852759211Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:03.854717 containerd[1551]: time="2025-07-10T00:30:03.854680249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:03.855681 containerd[1551]: time="2025-07-10T00:30:03.855640608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 2.090743722s" Jul 10 00:30:03.855726 containerd[1551]: time="2025-07-10T00:30:03.855681368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 10 00:30:03.857017 containerd[1551]: time="2025-07-10T00:30:03.856922446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 00:30:03.864034 containerd[1551]: time="2025-07-10T00:30:03.863949758Z" level=info msg="CreateContainer within sandbox \"a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 00:30:03.879347 containerd[1551]: time="2025-07-10T00:30:03.879144260Z" level=info msg="CreateContainer within sandbox \"a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b9e3c1c61ad4baf0621622ef9b8cac7864d8d8b0aa156718f645cc21ba5c3e5a\"" Jul 10 00:30:03.882436 containerd[1551]: time="2025-07-10T00:30:03.879943379Z" level=info msg="StartContainer for \"b9e3c1c61ad4baf0621622ef9b8cac7864d8d8b0aa156718f645cc21ba5c3e5a\"" Jul 10 00:30:03.973569 containerd[1551]: time="2025-07-10T00:30:03.972544391Z" level=info msg="StartContainer for \"b9e3c1c61ad4baf0621622ef9b8cac7864d8d8b0aa156718f645cc21ba5c3e5a\" returns successfully" Jul 10 00:30:03.981319 systemd-networkd[1229]: vxlan.calico: Link UP Jul 10 00:30:03.981324 systemd-networkd[1229]: vxlan.calico: Gained carrier Jul 10 00:30:05.420635 systemd-networkd[1229]: vxlan.calico: Gained IPv6LL Jul 10 00:30:05.482654 systemd[1]: Started sshd@8-10.0.0.65:22-10.0.0.1:38302.service - OpenSSH per-connection server daemon (10.0.0.1:38302). Jul 10 00:30:05.551327 sshd[5594]: Accepted publickey for core from 10.0.0.1 port 38302 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:05.553161 sshd[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:05.559875 systemd-logind[1524]: New session 9 of user core. Jul 10 00:30:05.566058 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 10 00:30:05.847498 sshd[5594]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:05.851198 systemd[1]: sshd@8-10.0.0.65:22-10.0.0.1:38302.service: Deactivated successfully. Jul 10 00:30:05.853898 systemd-logind[1524]: Session 9 logged out. Waiting for processes to exit. Jul 10 00:30:05.854149 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 00:30:05.856074 systemd-logind[1524]: Removed session 9. 
Jul 10 00:30:05.968921 containerd[1551]: time="2025-07-10T00:30:05.968508029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:05.969419 containerd[1551]: time="2025-07-10T00:30:05.969383068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 10 00:30:05.970066 containerd[1551]: time="2025-07-10T00:30:05.970014028Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:05.972528 containerd[1551]: time="2025-07-10T00:30:05.972491505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:05.973386 containerd[1551]: time="2025-07-10T00:30:05.973255064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.116293818s" Jul 10 00:30:05.973386 containerd[1551]: time="2025-07-10T00:30:05.973288704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 10 00:30:05.974601 containerd[1551]: time="2025-07-10T00:30:05.974425863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 00:30:05.985015 containerd[1551]: time="2025-07-10T00:30:05.984977252Z" level=info msg="CreateContainer within sandbox \"7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 00:30:05.995101 containerd[1551]: time="2025-07-10T00:30:05.994972362Z" level=info msg="CreateContainer within sandbox \"7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d5724dab13d8b818487100b9182630df7459f23cfa1a85663ea4a47f726bf05d\"" Jul 10 00:30:05.996609 containerd[1551]: time="2025-07-10T00:30:05.996567120Z" level=info msg="StartContainer for \"d5724dab13d8b818487100b9182630df7459f23cfa1a85663ea4a47f726bf05d\"" Jul 10 00:30:06.064414 containerd[1551]: time="2025-07-10T00:30:06.064374935Z" level=info msg="StartContainer for \"d5724dab13d8b818487100b9182630df7459f23cfa1a85663ea4a47f726bf05d\" returns successfully" Jul 10 00:30:06.239672 kubelet[2626]: I0710 00:30:06.239440 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c98c58b9b-vxtxm" podStartSLOduration=22.626506685 podStartE2EDuration="27.239422206s" podCreationTimestamp="2025-07-10 00:29:39 +0000 UTC" firstStartedPulling="2025-07-10 00:30:01.361332982 +0000 UTC m=+42.517942405" lastFinishedPulling="2025-07-10 00:30:05.974248503 +0000 UTC m=+47.130857926" observedRunningTime="2025-07-10 00:30:06.238156927 +0000 UTC m=+47.394766390" watchObservedRunningTime="2025-07-10 00:30:06.239422206 +0000 UTC m=+47.396031589" Jul 10 00:30:07.220447 kubelet[2626]: I0710 
00:30:07.219947 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:30:07.563305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980786626.mount: Deactivated successfully. Jul 10 00:30:08.108026 containerd[1551]: time="2025-07-10T00:30:08.107974840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:08.110870 containerd[1551]: time="2025-07-10T00:30:08.110832238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 10 00:30:08.116541 containerd[1551]: time="2025-07-10T00:30:08.116508873Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:08.118955 containerd[1551]: time="2025-07-10T00:30:08.118917271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:08.120448 containerd[1551]: time="2025-07-10T00:30:08.120409830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.145952007s" Jul 10 00:30:08.120448 containerd[1551]: time="2025-07-10T00:30:08.120445710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 10 00:30:08.121941 containerd[1551]: time="2025-07-10T00:30:08.121864228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 00:30:08.122761 containerd[1551]: time="2025-07-10T00:30:08.122627788Z" level=info msg="CreateContainer within sandbox \"0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 00:30:08.137287 containerd[1551]: time="2025-07-10T00:30:08.137122416Z" level=info msg="CreateContainer within sandbox \"0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fb2a74b55a2f326e723a348d3c6544abf45f93811ef16db57764fed5bcc2eebb\"" Jul 10 00:30:08.138058 containerd[1551]: time="2025-07-10T00:30:08.138028455Z" level=info msg="StartContainer for \"fb2a74b55a2f326e723a348d3c6544abf45f93811ef16db57764fed5bcc2eebb\"" Jul 10 00:30:08.191828 containerd[1551]: time="2025-07-10T00:30:08.191785129Z" level=info msg="StartContainer for \"fb2a74b55a2f326e723a348d3c6544abf45f93811ef16db57764fed5bcc2eebb\" returns successfully" Jul 10 00:30:08.236962 kubelet[2626]: I0710 00:30:08.236899 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-hhddt" podStartSLOduration=24.423199812 podStartE2EDuration="30.236882211s" podCreationTimestamp="2025-07-10 00:29:38 +0000 UTC" firstStartedPulling="2025-07-10 00:30:02.30745343 +0000 UTC m=+43.464062853" lastFinishedPulling="2025-07-10 00:30:08.121135829 +0000 UTC m=+49.277745252" observedRunningTime="2025-07-10 00:30:08.234757013 +0000 UTC 
m=+49.391366436" watchObservedRunningTime="2025-07-10 00:30:08.236882211 +0000 UTC m=+49.393491594" Jul 10 00:30:09.411565 containerd[1551]: time="2025-07-10T00:30:09.411517039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:09.412659 containerd[1551]: time="2025-07-10T00:30:09.412591958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Jul 10 00:30:09.416374 containerd[1551]: time="2025-07-10T00:30:09.416031075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.294125607s" Jul 10 00:30:09.416374 containerd[1551]: time="2025-07-10T00:30:09.416065115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 10 00:30:09.420025 containerd[1551]: time="2025-07-10T00:30:09.419990112Z" level=info msg="CreateContainer within sandbox \"a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 00:30:09.421504 containerd[1551]: time="2025-07-10T00:30:09.420889712Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:09.421651 containerd[1551]: time="2025-07-10T00:30:09.421620991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 10 00:30:09.437531 containerd[1551]: time="2025-07-10T00:30:09.437486498Z" level=info msg="CreateContainer within sandbox \"a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d48abf9ce5013952b2df0ae5c0939787ba35280e49f1f251bf96735996ca30ab\"" Jul 10 00:30:09.438735 containerd[1551]: time="2025-07-10T00:30:09.438701897Z" level=info msg="StartContainer for \"d48abf9ce5013952b2df0ae5c0939787ba35280e49f1f251bf96735996ca30ab\"" Jul 10 00:30:09.498445 containerd[1551]: time="2025-07-10T00:30:09.498051410Z" level=info msg="StartContainer for \"d48abf9ce5013952b2df0ae5c0939787ba35280e49f1f251bf96735996ca30ab\" returns successfully" Jul 10 00:30:09.567994 kubelet[2626]: I0710 00:30:09.567941 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:30:10.022453 kubelet[2626]: I0710 00:30:10.022413 2626 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 00:30:10.022614 kubelet[2626]: I0710 00:30:10.022484 2626 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 00:30:10.243223 kubelet[2626]: I0710 00:30:10.243075 2626 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="calico-system/csi-node-driver-qqs9d" podStartSLOduration=23.418246397 podStartE2EDuration="32.243045791s" podCreationTimestamp="2025-07-10 00:29:38 +0000 UTC" firstStartedPulling="2025-07-10 00:30:00.592082201 +0000 UTC m=+41.748691624" lastFinishedPulling="2025-07-10 00:30:09.416881595 +0000 UTC m=+50.573491018" observedRunningTime="2025-07-10 00:30:10.242263992 +0000 UTC m=+51.398873415" watchObservedRunningTime="2025-07-10 00:30:10.243045791 +0000 UTC m=+51.399655214" Jul 10 00:30:10.861666 systemd[1]: Started sshd@9-10.0.0.65:22-10.0.0.1:38304.service - OpenSSH per-connection server daemon (10.0.0.1:38304). Jul 10 00:30:10.913475 sshd[5848]: Accepted publickey for core from 10.0.0.1 port 38304 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:10.915391 sshd[5848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:10.921111 systemd-logind[1524]: New session 10 of user core. Jul 10 00:30:10.924767 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 10 00:30:11.256612 sshd[5848]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:11.266786 systemd[1]: Started sshd@10-10.0.0.65:22-10.0.0.1:38320.service - OpenSSH per-connection server daemon (10.0.0.1:38320). Jul 10 00:30:11.267188 systemd[1]: sshd@9-10.0.0.65:22-10.0.0.1:38304.service: Deactivated successfully. Jul 10 00:30:11.273015 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 00:30:11.274993 systemd-logind[1524]: Session 10 logged out. Waiting for processes to exit. Jul 10 00:30:11.279097 systemd-logind[1524]: Removed session 10. Jul 10 00:30:11.299554 sshd[5864]: Accepted publickey for core from 10.0.0.1 port 38320 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:11.301136 sshd[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:11.306938 systemd-logind[1524]: New session 11 of user core. Jul 10 00:30:11.316663 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 10 00:30:11.526469 sshd[5864]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:11.542000 systemd[1]: Started sshd@11-10.0.0.65:22-10.0.0.1:38332.service - OpenSSH per-connection server daemon (10.0.0.1:38332). Jul 10 00:30:11.542969 systemd[1]: sshd@10-10.0.0.65:22-10.0.0.1:38320.service: Deactivated successfully. Jul 10 00:30:11.548729 systemd-logind[1524]: Session 11 logged out. Waiting for processes to exit. Jul 10 00:30:11.548884 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 00:30:11.554804 systemd-logind[1524]: Removed session 11. Jul 10 00:30:11.588726 sshd[5878]: Accepted publickey for core from 10.0.0.1 port 38332 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:11.590105 sshd[5878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:11.594732 systemd-logind[1524]: New session 12 of user core. Jul 10 00:30:11.607697 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 10 00:30:11.771285 sshd[5878]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:11.775177 systemd[1]: sshd@11-10.0.0.65:22-10.0.0.1:38332.service: Deactivated successfully. Jul 10 00:30:11.779327 systemd-logind[1524]: Session 12 logged out. Waiting for processes to exit. Jul 10 00:30:11.779528 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 00:30:11.781720 systemd-logind[1524]: Removed session 12. 
Jul 10 00:30:16.785818 systemd[1]: Started sshd@12-10.0.0.65:22-10.0.0.1:48672.service - OpenSSH per-connection server daemon (10.0.0.1:48672). Jul 10 00:30:16.817025 sshd[5909]: Accepted publickey for core from 10.0.0.1 port 48672 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:16.818436 sshd[5909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:16.822434 systemd-logind[1524]: New session 13 of user core. Jul 10 00:30:16.833728 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 10 00:30:16.966730 sshd[5909]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:16.970598 systemd[1]: sshd@12-10.0.0.65:22-10.0.0.1:48672.service: Deactivated successfully. Jul 10 00:30:16.974491 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 00:30:16.975587 systemd-logind[1524]: Session 13 logged out. Waiting for processes to exit. Jul 10 00:30:16.977092 systemd-logind[1524]: Removed session 13. Jul 10 00:30:18.924986 containerd[1551]: time="2025-07-10T00:30:18.924899684Z" level=info msg="StopPodSandbox for \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\"" Jul 10 00:30:19.023487 containerd[1551]: 2025-07-10 00:30:18.990 [WARNING][5935] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--d987s-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3a625459-29c1-438f-ae9d-de10e2e06fa6", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa", Pod:"coredns-7c65d6cfc9-d987s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd9b7ba489e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:19.023487 containerd[1551]: 2025-07-10 00:30:18.991 [INFO][5935] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:30:19.023487 containerd[1551]: 2025-07-10 00:30:18.991 
[INFO][5935] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" iface="eth0" netns="" Jul 10 00:30:19.023487 containerd[1551]: 2025-07-10 00:30:18.991 [INFO][5935] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:30:19.023487 containerd[1551]: 2025-07-10 00:30:18.991 [INFO][5935] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:30:19.023487 containerd[1551]: 2025-07-10 00:30:19.010 [INFO][5944] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" HandleID="k8s-pod-network.cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Workload="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:30:19.023487 containerd[1551]: 2025-07-10 00:30:19.010 [INFO][5944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:19.023487 containerd[1551]: 2025-07-10 00:30:19.010 [INFO][5944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:19.023487 containerd[1551]: 2025-07-10 00:30:19.018 [WARNING][5944] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" HandleID="k8s-pod-network.cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Workload="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:30:19.023487 containerd[1551]: 2025-07-10 00:30:19.018 [INFO][5944] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" HandleID="k8s-pod-network.cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Workload="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:30:19.023487 containerd[1551]: 2025-07-10 00:30:19.020 [INFO][5944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:19.023487 containerd[1551]: 2025-07-10 00:30:19.022 [INFO][5935] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:30:19.024193 containerd[1551]: time="2025-07-10T00:30:19.024058080Z" level=info msg="TearDown network for sandbox \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\" successfully" Jul 10 00:30:19.024193 containerd[1551]: time="2025-07-10T00:30:19.024088360Z" level=info msg="StopPodSandbox for \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\" returns successfully" Jul 10 00:30:19.024669 containerd[1551]: time="2025-07-10T00:30:19.024644160Z" level=info msg="RemovePodSandbox for \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\"" Jul 10 00:30:19.036726 containerd[1551]: time="2025-07-10T00:30:19.036680795Z" level=info msg="Forcibly stopping sandbox \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\"" Jul 10 00:30:19.119765 containerd[1551]: 2025-07-10 00:30:19.086 [WARNING][5962] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--d987s-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3a625459-29c1-438f-ae9d-de10e2e06fa6", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebf25de24877f8db76f849b79b7399aae24119a1571175317ada1fa45cf569aa", Pod:"coredns-7c65d6cfc9-d987s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidd9b7ba489e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:19.119765 containerd[1551]: 2025-07-10 00:30:19.086 [INFO][5962] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:30:19.119765 containerd[1551]: 2025-07-10 00:30:19.086 [INFO][5962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" iface="eth0" netns="" Jul 10 00:30:19.119765 containerd[1551]: 2025-07-10 00:30:19.086 [INFO][5962] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:30:19.119765 containerd[1551]: 2025-07-10 00:30:19.086 [INFO][5962] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:30:19.119765 containerd[1551]: 2025-07-10 00:30:19.106 [INFO][5972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" HandleID="k8s-pod-network.cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Workload="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:30:19.119765 containerd[1551]: 2025-07-10 00:30:19.106 [INFO][5972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:19.119765 containerd[1551]: 2025-07-10 00:30:19.106 [INFO][5972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:30:19.119765 containerd[1551]: 2025-07-10 00:30:19.114 [WARNING][5972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" HandleID="k8s-pod-network.cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Workload="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:30:19.119765 containerd[1551]: 2025-07-10 00:30:19.114 [INFO][5972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" HandleID="k8s-pod-network.cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Workload="localhost-k8s-coredns--7c65d6cfc9--d987s-eth0" Jul 10 00:30:19.119765 containerd[1551]: 2025-07-10 00:30:19.116 [INFO][5972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:19.119765 containerd[1551]: 2025-07-10 00:30:19.118 [INFO][5962] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb" Jul 10 00:30:19.120191 containerd[1551]: time="2025-07-10T00:30:19.119784320Z" level=info msg="TearDown network for sandbox \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\" successfully" Jul 10 00:30:19.130191 containerd[1551]: time="2025-07-10T00:30:19.130134796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:30:19.130310 containerd[1551]: time="2025-07-10T00:30:19.130231996Z" level=info msg="RemovePodSandbox \"cd7a94cea5621e186b65e50de2508ce1e40236ee77d6e85d69b419e77eb330cb\" returns successfully" Jul 10 00:30:19.131046 containerd[1551]: time="2025-07-10T00:30:19.130764316Z" level=info msg="StopPodSandbox for \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\"" Jul 10 00:30:19.206717 containerd[1551]: 2025-07-10 00:30:19.167 [WARNING][5990] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0", GenerateName:"calico-apiserver-5d465fcf7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b394ef9-6acd-4661-b521-8820f934f5ed", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d465fcf7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f", Pod:"calico-apiserver-5d465fcf7d-ksj25", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicbbe9d56bb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:19.206717 containerd[1551]: 2025-07-10 00:30:19.168 [INFO][5990] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:30:19.206717 containerd[1551]: 2025-07-10 00:30:19.168 [INFO][5990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" iface="eth0" netns="" Jul 10 00:30:19.206717 containerd[1551]: 2025-07-10 00:30:19.168 [INFO][5990] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:30:19.206717 containerd[1551]: 2025-07-10 00:30:19.168 [INFO][5990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:30:19.206717 containerd[1551]: 2025-07-10 00:30:19.192 [INFO][5999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" HandleID="k8s-pod-network.5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:30:19.206717 containerd[1551]: 2025-07-10 00:30:19.192 [INFO][5999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:19.206717 containerd[1551]: 2025-07-10 00:30:19.192 [INFO][5999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:19.206717 containerd[1551]: 2025-07-10 00:30:19.200 [WARNING][5999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" HandleID="k8s-pod-network.5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:30:19.206717 containerd[1551]: 2025-07-10 00:30:19.200 [INFO][5999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" HandleID="k8s-pod-network.5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:30:19.206717 containerd[1551]: 2025-07-10 00:30:19.202 [INFO][5999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:19.206717 containerd[1551]: 2025-07-10 00:30:19.204 [INFO][5990] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:30:19.207227 containerd[1551]: time="2025-07-10T00:30:19.207199804Z" level=info msg="TearDown network for sandbox \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\" successfully" Jul 10 00:30:19.207429 containerd[1551]: time="2025-07-10T00:30:19.207299044Z" level=info msg="StopPodSandbox for \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\" returns successfully" Jul 10 00:30:19.207851 containerd[1551]: time="2025-07-10T00:30:19.207823644Z" level=info msg="RemovePodSandbox for \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\"" Jul 10 00:30:19.207904 containerd[1551]: time="2025-07-10T00:30:19.207860964Z" level=info msg="Forcibly stopping sandbox \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\"" Jul 10 00:30:19.275089 containerd[1551]: 2025-07-10 00:30:19.239 [WARNING][6017] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0", GenerateName:"calico-apiserver-5d465fcf7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0b394ef9-6acd-4661-b521-8820f934f5ed", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d465fcf7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6113e8027a179f86a4f149567bbb28118937dbf225f5d129c38d5ab4754b397f", Pod:"calico-apiserver-5d465fcf7d-ksj25", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicbbe9d56bb3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:19.275089 containerd[1551]: 2025-07-10 00:30:19.239 [INFO][6017] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:30:19.275089 containerd[1551]: 2025-07-10 00:30:19.239 [INFO][6017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" iface="eth0" netns="" Jul 10 00:30:19.275089 containerd[1551]: 2025-07-10 00:30:19.239 [INFO][6017] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:30:19.275089 containerd[1551]: 2025-07-10 00:30:19.239 [INFO][6017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:30:19.275089 containerd[1551]: 2025-07-10 00:30:19.261 [INFO][6026] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" HandleID="k8s-pod-network.5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:30:19.275089 containerd[1551]: 2025-07-10 00:30:19.261 [INFO][6026] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:19.275089 containerd[1551]: 2025-07-10 00:30:19.261 [INFO][6026] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:19.275089 containerd[1551]: 2025-07-10 00:30:19.269 [WARNING][6026] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" HandleID="k8s-pod-network.5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:30:19.275089 containerd[1551]: 2025-07-10 00:30:19.269 [INFO][6026] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" HandleID="k8s-pod-network.5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--ksj25-eth0" Jul 10 00:30:19.275089 containerd[1551]: 2025-07-10 00:30:19.271 [INFO][6026] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:19.275089 containerd[1551]: 2025-07-10 00:30:19.273 [INFO][6017] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883" Jul 10 00:30:19.275530 containerd[1551]: time="2025-07-10T00:30:19.275112216Z" level=info msg="TearDown network for sandbox \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\" successfully" Jul 10 00:30:19.288053 containerd[1551]: time="2025-07-10T00:30:19.287991210Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:30:19.288053 containerd[1551]: time="2025-07-10T00:30:19.288057890Z" level=info msg="RemovePodSandbox \"5e3ec09e63480aa5bb7f285e72eea60c3a5ce3879edb561da433e3e1aeda4883\" returns successfully" Jul 10 00:30:19.288498 containerd[1551]: time="2025-07-10T00:30:19.288470250Z" level=info msg="StopPodSandbox for \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\"" Jul 10 00:30:19.353996 containerd[1551]: 2025-07-10 00:30:19.321 [WARNING][6044] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--hhddt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8", Pod:"goldmane-58fd7646b9-hhddt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali171c365e1bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:19.353996 containerd[1551]: 2025-07-10 00:30:19.321 [INFO][6044] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:30:19.353996 containerd[1551]: 2025-07-10 00:30:19.321 [INFO][6044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" iface="eth0" netns="" Jul 10 00:30:19.353996 containerd[1551]: 2025-07-10 00:30:19.321 [INFO][6044] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:30:19.353996 containerd[1551]: 2025-07-10 00:30:19.321 [INFO][6044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:30:19.353996 containerd[1551]: 2025-07-10 00:30:19.340 [INFO][6053] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" HandleID="k8s-pod-network.0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Workload="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:19.353996 containerd[1551]: 2025-07-10 00:30:19.340 [INFO][6053] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:19.353996 containerd[1551]: 2025-07-10 00:30:19.340 [INFO][6053] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:19.353996 containerd[1551]: 2025-07-10 00:30:19.349 [WARNING][6053] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" HandleID="k8s-pod-network.0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Workload="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:19.353996 containerd[1551]: 2025-07-10 00:30:19.349 [INFO][6053] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" HandleID="k8s-pod-network.0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Workload="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:19.353996 containerd[1551]: 2025-07-10 00:30:19.350 [INFO][6053] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:19.353996 containerd[1551]: 2025-07-10 00:30:19.352 [INFO][6044] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:30:19.355010 containerd[1551]: time="2025-07-10T00:30:19.354021783Z" level=info msg="TearDown network for sandbox \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\" successfully" Jul 10 00:30:19.355010 containerd[1551]: time="2025-07-10T00:30:19.354047343Z" level=info msg="StopPodSandbox for \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\" returns successfully" Jul 10 00:30:19.355010 containerd[1551]: time="2025-07-10T00:30:19.354546183Z" level=info msg="RemovePodSandbox for \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\"" Jul 10 00:30:19.355010 containerd[1551]: time="2025-07-10T00:30:19.354577543Z" level=info msg="Forcibly stopping sandbox \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\"" Jul 10 00:30:19.422528 containerd[1551]: 2025-07-10 00:30:19.389 [WARNING][6071] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--hhddt-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"4f8b5bfa-1ed7-4e08-8731-9d0620e99ffd", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e6a859eef594385120ad584100d288092c60ce59c8960896627feb82e815ea8", Pod:"goldmane-58fd7646b9-hhddt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali171c365e1bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:19.422528 containerd[1551]: 2025-07-10 00:30:19.390 [INFO][6071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:30:19.422528 containerd[1551]: 2025-07-10 00:30:19.390 [INFO][6071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" iface="eth0" netns="" Jul 10 00:30:19.422528 containerd[1551]: 2025-07-10 00:30:19.390 [INFO][6071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:30:19.422528 containerd[1551]: 2025-07-10 00:30:19.390 [INFO][6071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:30:19.422528 containerd[1551]: 2025-07-10 00:30:19.408 [INFO][6080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" HandleID="k8s-pod-network.0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Workload="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:19.422528 containerd[1551]: 2025-07-10 00:30:19.408 [INFO][6080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:19.422528 containerd[1551]: 2025-07-10 00:30:19.408 [INFO][6080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:19.422528 containerd[1551]: 2025-07-10 00:30:19.416 [WARNING][6080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" HandleID="k8s-pod-network.0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Workload="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:19.422528 containerd[1551]: 2025-07-10 00:30:19.416 [INFO][6080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" HandleID="k8s-pod-network.0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Workload="localhost-k8s-goldmane--58fd7646b9--hhddt-eth0" Jul 10 00:30:19.422528 containerd[1551]: 2025-07-10 00:30:19.417 [INFO][6080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:19.422528 containerd[1551]: 2025-07-10 00:30:19.419 [INFO][6071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0" Jul 10 00:30:19.422528 containerd[1551]: time="2025-07-10T00:30:19.421562995Z" level=info msg="TearDown network for sandbox \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\" successfully" Jul 10 00:30:19.453535 containerd[1551]: time="2025-07-10T00:30:19.453490662Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:30:19.453811 containerd[1551]: time="2025-07-10T00:30:19.453779901Z" level=info msg="RemovePodSandbox \"0d52e90b79513914355bb40833ca4db89e0426593bf9e5ab8284f431e93c05f0\" returns successfully" Jul 10 00:30:19.454706 containerd[1551]: time="2025-07-10T00:30:19.454670901Z" level=info msg="StopPodSandbox for \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\"" Jul 10 00:30:19.522577 containerd[1551]: 2025-07-10 00:30:19.490 [WARNING][6097] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qqs9d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3ab91194-b6c2-41a0-9cec-3c4e398dcbbf", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64", Pod:"csi-node-driver-qqs9d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ca8ff2e3a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:19.522577 containerd[1551]: 2025-07-10 00:30:19.490 [INFO][6097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:30:19.522577 containerd[1551]: 2025-07-10 00:30:19.490 [INFO][6097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" iface="eth0" netns="" Jul 10 00:30:19.522577 containerd[1551]: 2025-07-10 00:30:19.490 [INFO][6097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:30:19.522577 containerd[1551]: 2025-07-10 00:30:19.490 [INFO][6097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:30:19.522577 containerd[1551]: 2025-07-10 00:30:19.509 [INFO][6105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" HandleID="k8s-pod-network.d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Workload="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:19.522577 containerd[1551]: 2025-07-10 00:30:19.509 [INFO][6105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:19.522577 containerd[1551]: 2025-07-10 00:30:19.509 [INFO][6105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:19.522577 containerd[1551]: 2025-07-10 00:30:19.517 [WARNING][6105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" HandleID="k8s-pod-network.d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Workload="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:19.522577 containerd[1551]: 2025-07-10 00:30:19.517 [INFO][6105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" HandleID="k8s-pod-network.d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Workload="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:19.522577 containerd[1551]: 2025-07-10 00:30:19.518 [INFO][6105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:19.522577 containerd[1551]: 2025-07-10 00:30:19.520 [INFO][6097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:30:19.522577 containerd[1551]: time="2025-07-10T00:30:19.522550913Z" level=info msg="TearDown network for sandbox \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\" successfully" Jul 10 00:30:19.522577 containerd[1551]: time="2025-07-10T00:30:19.522575073Z" level=info msg="StopPodSandbox for \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\" returns successfully" Jul 10 00:30:19.525400 containerd[1551]: time="2025-07-10T00:30:19.525069592Z" level=info msg="RemovePodSandbox for \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\"" Jul 10 00:30:19.525400 containerd[1551]: time="2025-07-10T00:30:19.525110512Z" level=info msg="Forcibly stopping sandbox \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\"" Jul 10 00:30:19.594140 containerd[1551]: 2025-07-10 00:30:19.559 [WARNING][6123] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qqs9d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3ab91194-b6c2-41a0-9cec-3c4e398dcbbf", ResourceVersion:"1130", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a13665e7ea05883b5677af8fcfb4154c185fbfa9fe355c65237f9cb6ea4d2e64", Pod:"csi-node-driver-qqs9d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ca8ff2e3a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:19.594140 containerd[1551]: 2025-07-10 00:30:19.560 [INFO][6123] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:30:19.594140 containerd[1551]: 2025-07-10 00:30:19.560 [INFO][6123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" iface="eth0" netns="" Jul 10 00:30:19.594140 containerd[1551]: 2025-07-10 00:30:19.560 [INFO][6123] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:30:19.594140 containerd[1551]: 2025-07-10 00:30:19.560 [INFO][6123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:30:19.594140 containerd[1551]: 2025-07-10 00:30:19.580 [INFO][6131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" HandleID="k8s-pod-network.d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Workload="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:19.594140 containerd[1551]: 2025-07-10 00:30:19.580 [INFO][6131] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:19.594140 containerd[1551]: 2025-07-10 00:30:19.580 [INFO][6131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:19.594140 containerd[1551]: 2025-07-10 00:30:19.589 [WARNING][6131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" HandleID="k8s-pod-network.d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Workload="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:19.594140 containerd[1551]: 2025-07-10 00:30:19.589 [INFO][6131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" HandleID="k8s-pod-network.d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Workload="localhost-k8s-csi--node--driver--qqs9d-eth0" Jul 10 00:30:19.594140 containerd[1551]: 2025-07-10 00:30:19.591 [INFO][6131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:19.594140 containerd[1551]: 2025-07-10 00:30:19.592 [INFO][6123] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f" Jul 10 00:30:19.594557 containerd[1551]: time="2025-07-10T00:30:19.594164883Z" level=info msg="TearDown network for sandbox \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\" successfully" Jul 10 00:30:19.597272 containerd[1551]: time="2025-07-10T00:30:19.597223202Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:30:19.597352 containerd[1551]: time="2025-07-10T00:30:19.597322562Z" level=info msg="RemovePodSandbox \"d2052c294d16d136ba95d8fa7a0c087b64e8a6bc6d9a602ced627e7aa3a79f7f\" returns successfully" Jul 10 00:30:19.597866 containerd[1551]: time="2025-07-10T00:30:19.597846801Z" level=info msg="StopPodSandbox for \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\"" Jul 10 00:30:19.666521 containerd[1551]: 2025-07-10 00:30:19.635 [WARNING][6149] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" WorkloadEndpoint="localhost-k8s-whisker--7969848dbb--st4jn-eth0" Jul 10 00:30:19.666521 containerd[1551]: 2025-07-10 00:30:19.635 [INFO][6149] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:30:19.666521 containerd[1551]: 2025-07-10 00:30:19.635 [INFO][6149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" iface="eth0" netns="" Jul 10 00:30:19.666521 containerd[1551]: 2025-07-10 00:30:19.635 [INFO][6149] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:30:19.666521 containerd[1551]: 2025-07-10 00:30:19.635 [INFO][6149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:30:19.666521 containerd[1551]: 2025-07-10 00:30:19.653 [INFO][6157] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" HandleID="k8s-pod-network.0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Workload="localhost-k8s-whisker--7969848dbb--st4jn-eth0" Jul 10 00:30:19.666521 containerd[1551]: 2025-07-10 00:30:19.653 [INFO][6157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:19.666521 containerd[1551]: 2025-07-10 00:30:19.653 [INFO][6157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:19.666521 containerd[1551]: 2025-07-10 00:30:19.662 [WARNING][6157] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" HandleID="k8s-pod-network.0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Workload="localhost-k8s-whisker--7969848dbb--st4jn-eth0" Jul 10 00:30:19.666521 containerd[1551]: 2025-07-10 00:30:19.662 [INFO][6157] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" HandleID="k8s-pod-network.0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Workload="localhost-k8s-whisker--7969848dbb--st4jn-eth0" Jul 10 00:30:19.666521 containerd[1551]: 2025-07-10 00:30:19.663 [INFO][6157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:19.666521 containerd[1551]: 2025-07-10 00:30:19.664 [INFO][6149] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:30:19.666952 containerd[1551]: time="2025-07-10T00:30:19.666564893Z" level=info msg="TearDown network for sandbox \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\" successfully" Jul 10 00:30:19.666952 containerd[1551]: time="2025-07-10T00:30:19.666608253Z" level=info msg="StopPodSandbox for \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\" returns successfully" Jul 10 00:30:19.667099 containerd[1551]: time="2025-07-10T00:30:19.667010373Z" level=info msg="RemovePodSandbox for \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\"" Jul 10 00:30:19.667099 containerd[1551]: time="2025-07-10T00:30:19.667043613Z" level=info msg="Forcibly stopping sandbox \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\"" Jul 10 00:30:19.745217 containerd[1551]: 2025-07-10 00:30:19.706 [WARNING][6174] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" WorkloadEndpoint="localhost-k8s-whisker--7969848dbb--st4jn-eth0" Jul 10 00:30:19.745217 containerd[1551]: 2025-07-10 00:30:19.706 [INFO][6174] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:30:19.745217 containerd[1551]: 2025-07-10 00:30:19.706 [INFO][6174] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" iface="eth0" netns="" Jul 10 00:30:19.745217 containerd[1551]: 2025-07-10 00:30:19.706 [INFO][6174] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:30:19.745217 containerd[1551]: 2025-07-10 00:30:19.707 [INFO][6174] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:30:19.745217 containerd[1551]: 2025-07-10 00:30:19.729 [INFO][6182] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" HandleID="k8s-pod-network.0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Workload="localhost-k8s-whisker--7969848dbb--st4jn-eth0" Jul 10 00:30:19.745217 containerd[1551]: 2025-07-10 00:30:19.730 [INFO][6182] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:19.745217 containerd[1551]: 2025-07-10 00:30:19.730 [INFO][6182] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:19.745217 containerd[1551]: 2025-07-10 00:30:19.739 [WARNING][6182] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" HandleID="k8s-pod-network.0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Workload="localhost-k8s-whisker--7969848dbb--st4jn-eth0" Jul 10 00:30:19.745217 containerd[1551]: 2025-07-10 00:30:19.739 [INFO][6182] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" HandleID="k8s-pod-network.0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Workload="localhost-k8s-whisker--7969848dbb--st4jn-eth0" Jul 10 00:30:19.745217 containerd[1551]: 2025-07-10 00:30:19.741 [INFO][6182] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:19.745217 containerd[1551]: 2025-07-10 00:30:19.743 [INFO][6174] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c" Jul 10 00:30:19.745635 containerd[1551]: time="2025-07-10T00:30:19.745238100Z" level=info msg="TearDown network for sandbox \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\" successfully" Jul 10 00:30:19.750213 containerd[1551]: time="2025-07-10T00:30:19.750165938Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:30:19.750304 containerd[1551]: time="2025-07-10T00:30:19.750237098Z" level=info msg="RemovePodSandbox \"0252f966471961a52926e5c000ef36a0231144311333d6e9baa7f91eec76f87c\" returns successfully" Jul 10 00:30:19.750674 containerd[1551]: time="2025-07-10T00:30:19.750647178Z" level=info msg="StopPodSandbox for \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\"" Jul 10 00:30:19.813048 containerd[1551]: 2025-07-10 00:30:19.781 [WARNING][6200] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3856e87c-f471-4d8a-8a66-b6670b2d88cd", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47", Pod:"coredns-7c65d6cfc9-wxx7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1278d42898b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:19.813048 containerd[1551]: 2025-07-10 00:30:19.781 [INFO][6200] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:30:19.813048 containerd[1551]: 2025-07-10 00:30:19.781 [INFO][6200] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" iface="eth0" netns="" Jul 10 00:30:19.813048 containerd[1551]: 2025-07-10 00:30:19.781 [INFO][6200] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:30:19.813048 containerd[1551]: 2025-07-10 00:30:19.781 [INFO][6200] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:30:19.813048 containerd[1551]: 2025-07-10 00:30:19.799 [INFO][6209] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" HandleID="k8s-pod-network.15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Workload="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:19.813048 containerd[1551]: 2025-07-10 00:30:19.799 [INFO][6209] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:19.813048 containerd[1551]: 2025-07-10 00:30:19.799 [INFO][6209] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:30:19.813048 containerd[1551]: 2025-07-10 00:30:19.808 [WARNING][6209] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" HandleID="k8s-pod-network.15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Workload="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:19.813048 containerd[1551]: 2025-07-10 00:30:19.808 [INFO][6209] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" HandleID="k8s-pod-network.15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Workload="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:19.813048 containerd[1551]: 2025-07-10 00:30:19.809 [INFO][6209] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:19.813048 containerd[1551]: 2025-07-10 00:30:19.811 [INFO][6200] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:30:19.813048 containerd[1551]: time="2025-07-10T00:30:19.813014272Z" level=info msg="TearDown network for sandbox \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\" successfully" Jul 10 00:30:19.813048 containerd[1551]: time="2025-07-10T00:30:19.813040752Z" level=info msg="StopPodSandbox for \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\" returns successfully" Jul 10 00:30:19.814283 containerd[1551]: time="2025-07-10T00:30:19.814210911Z" level=info msg="RemovePodSandbox for \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\"" Jul 10 00:30:19.814283 containerd[1551]: time="2025-07-10T00:30:19.814282431Z" level=info msg="Forcibly stopping sandbox \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\"" Jul 10 00:30:19.880092 containerd[1551]: 2025-07-10 00:30:19.846 [WARNING][6226] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3856e87c-f471-4d8a-8a66-b6670b2d88cd", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"625d238a7f42459c02edbdf8bd6f90ecbfff77c2543ffa652d9489b41d874d47", Pod:"coredns-7c65d6cfc9-wxx7d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1278d42898b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:19.880092 containerd[1551]: 2025-07-10 00:30:19.846 [INFO][6226] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:30:19.880092 containerd[1551]: 2025-07-10 00:30:19.846 [INFO][6226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" iface="eth0" netns="" Jul 10 00:30:19.880092 containerd[1551]: 2025-07-10 00:30:19.846 [INFO][6226] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:30:19.880092 containerd[1551]: 2025-07-10 00:30:19.846 [INFO][6226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:30:19.880092 containerd[1551]: 2025-07-10 00:30:19.866 [INFO][6235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" HandleID="k8s-pod-network.15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Workload="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:19.880092 containerd[1551]: 2025-07-10 00:30:19.867 [INFO][6235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:19.880092 containerd[1551]: 2025-07-10 00:30:19.867 [INFO][6235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 00:30:19.880092 containerd[1551]: 2025-07-10 00:30:19.875 [WARNING][6235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" HandleID="k8s-pod-network.15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Workload="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:19.880092 containerd[1551]: 2025-07-10 00:30:19.875 [INFO][6235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" HandleID="k8s-pod-network.15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Workload="localhost-k8s-coredns--7c65d6cfc9--wxx7d-eth0" Jul 10 00:30:19.880092 containerd[1551]: 2025-07-10 00:30:19.876 [INFO][6235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:19.880092 containerd[1551]: 2025-07-10 00:30:19.878 [INFO][6226] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8" Jul 10 00:30:19.880527 containerd[1551]: time="2025-07-10T00:30:19.880125284Z" level=info msg="TearDown network for sandbox \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\" successfully" Jul 10 00:30:19.883376 containerd[1551]: time="2025-07-10T00:30:19.883305843Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:30:19.883454 containerd[1551]: time="2025-07-10T00:30:19.883398723Z" level=info msg="RemovePodSandbox \"15a14e260d44ccb6976673aa83af949db4c590b9ba122828a9b179886c2377e8\" returns successfully" Jul 10 00:30:19.883877 containerd[1551]: time="2025-07-10T00:30:19.883842403Z" level=info msg="StopPodSandbox for \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\"" Jul 10 00:30:19.952285 containerd[1551]: 2025-07-10 00:30:19.918 [WARNING][6252] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0", GenerateName:"calico-kube-controllers-7c98c58b9b-", Namespace:"calico-system", SelfLink:"", UID:"3a75ce03-7b22-4075-a20a-956c07a61ee9", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c98c58b9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb", Pod:"calico-kube-controllers-7c98c58b9b-vxtxm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali38e15ba6e2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:19.952285 containerd[1551]: 2025-07-10 00:30:19.918 [INFO][6252] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:30:19.952285 containerd[1551]: 2025-07-10 00:30:19.918 [INFO][6252] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" iface="eth0" netns="" Jul 10 00:30:19.952285 containerd[1551]: 2025-07-10 00:30:19.918 [INFO][6252] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:30:19.952285 containerd[1551]: 2025-07-10 00:30:19.918 [INFO][6252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:30:19.952285 containerd[1551]: 2025-07-10 00:30:19.935 [INFO][6261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" HandleID="k8s-pod-network.37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Workload="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:19.952285 containerd[1551]: 2025-07-10 00:30:19.935 [INFO][6261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:19.952285 containerd[1551]: 2025-07-10 00:30:19.935 [INFO][6261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:19.952285 containerd[1551]: 2025-07-10 00:30:19.947 [WARNING][6261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" HandleID="k8s-pod-network.37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Workload="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:19.952285 containerd[1551]: 2025-07-10 00:30:19.947 [INFO][6261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" HandleID="k8s-pod-network.37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Workload="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:19.952285 containerd[1551]: 2025-07-10 00:30:19.949 [INFO][6261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:19.952285 containerd[1551]: 2025-07-10 00:30:19.950 [INFO][6252] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:30:19.952906 containerd[1551]: time="2025-07-10T00:30:19.952324334Z" level=info msg="TearDown network for sandbox \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\" successfully" Jul 10 00:30:19.952906 containerd[1551]: time="2025-07-10T00:30:19.952353334Z" level=info msg="StopPodSandbox for \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\" returns successfully" Jul 10 00:30:19.952906 containerd[1551]: time="2025-07-10T00:30:19.952774214Z" level=info msg="RemovePodSandbox for \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\"" Jul 10 00:30:19.952906 containerd[1551]: time="2025-07-10T00:30:19.952802534Z" level=info msg="Forcibly stopping sandbox \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\"" Jul 10 00:30:20.020067 containerd[1551]: 2025-07-10 00:30:19.986 [WARNING][6279] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0", GenerateName:"calico-kube-controllers-7c98c58b9b-", Namespace:"calico-system", SelfLink:"", UID:"3a75ce03-7b22-4075-a20a-956c07a61ee9", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c98c58b9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b1c0e0ef8907f590900b72d74d3345100f76f7364e438441fafc9c93d9a22cb", Pod:"calico-kube-controllers-7c98c58b9b-vxtxm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali38e15ba6e2d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:20.020067 containerd[1551]: 2025-07-10 00:30:19.986 [INFO][6279] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:30:20.020067 containerd[1551]: 2025-07-10 00:30:19.986 [INFO][6279] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" iface="eth0" netns="" Jul 10 00:30:20.020067 containerd[1551]: 2025-07-10 00:30:19.987 [INFO][6279] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:30:20.020067 containerd[1551]: 2025-07-10 00:30:19.987 [INFO][6279] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:30:20.020067 containerd[1551]: 2025-07-10 00:30:20.005 [INFO][6288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" HandleID="k8s-pod-network.37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Workload="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:20.020067 containerd[1551]: 2025-07-10 00:30:20.006 [INFO][6288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:20.020067 containerd[1551]: 2025-07-10 00:30:20.006 [INFO][6288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:20.020067 containerd[1551]: 2025-07-10 00:30:20.014 [WARNING][6288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" HandleID="k8s-pod-network.37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Workload="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:20.020067 containerd[1551]: 2025-07-10 00:30:20.014 [INFO][6288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" HandleID="k8s-pod-network.37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Workload="localhost-k8s-calico--kube--controllers--7c98c58b9b--vxtxm-eth0" Jul 10 00:30:20.020067 containerd[1551]: 2025-07-10 00:30:20.016 [INFO][6288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:20.020067 containerd[1551]: 2025-07-10 00:30:20.018 [INFO][6279] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332" Jul 10 00:30:20.020612 containerd[1551]: time="2025-07-10T00:30:20.020106306Z" level=info msg="TearDown network for sandbox \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\" successfully" Jul 10 00:30:20.023421 containerd[1551]: time="2025-07-10T00:30:20.023373865Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:30:20.023518 containerd[1551]: time="2025-07-10T00:30:20.023449625Z" level=info msg="RemovePodSandbox \"37822e2af5c822a18cf483527efb596a20ef9923c86548ce92ed216fb37e2332\" returns successfully" Jul 10 00:30:20.024230 containerd[1551]: time="2025-07-10T00:30:20.023913265Z" level=info msg="StopPodSandbox for \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\"" Jul 10 00:30:20.091983 containerd[1551]: 2025-07-10 00:30:20.056 [WARNING][6305] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0", GenerateName:"calico-apiserver-5d465fcf7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"09a904e3-2a27-4aa7-afe0-ae11924a0f3d", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d465fcf7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e", Pod:"calico-apiserver-5d465fcf7d-bpp6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali076b5175f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:20.091983 containerd[1551]: 2025-07-10 00:30:20.057 [INFO][6305] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:30:20.091983 containerd[1551]: 2025-07-10 00:30:20.057 [INFO][6305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" iface="eth0" netns="" Jul 10 00:30:20.091983 containerd[1551]: 2025-07-10 00:30:20.057 [INFO][6305] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:30:20.091983 containerd[1551]: 2025-07-10 00:30:20.057 [INFO][6305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:30:20.091983 containerd[1551]: 2025-07-10 00:30:20.076 [INFO][6314] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" HandleID="k8s-pod-network.f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:20.091983 containerd[1551]: 2025-07-10 00:30:20.076 [INFO][6314] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:20.091983 containerd[1551]: 2025-07-10 00:30:20.076 [INFO][6314] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:20.091983 containerd[1551]: 2025-07-10 00:30:20.086 [WARNING][6314] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" HandleID="k8s-pod-network.f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:20.091983 containerd[1551]: 2025-07-10 00:30:20.086 [INFO][6314] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" HandleID="k8s-pod-network.f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:20.091983 containerd[1551]: 2025-07-10 00:30:20.088 [INFO][6314] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:20.091983 containerd[1551]: 2025-07-10 00:30:20.090 [INFO][6305] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:30:20.092443 containerd[1551]: time="2025-07-10T00:30:20.092022158Z" level=info msg="TearDown network for sandbox \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\" successfully" Jul 10 00:30:20.092443 containerd[1551]: time="2025-07-10T00:30:20.092049798Z" level=info msg="StopPodSandbox for \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\" returns successfully" Jul 10 00:30:20.092529 containerd[1551]: time="2025-07-10T00:30:20.092489038Z" level=info msg="RemovePodSandbox for \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\"" Jul 10 00:30:20.092566 containerd[1551]: time="2025-07-10T00:30:20.092528398Z" level=info msg="Forcibly stopping sandbox \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\"" Jul 10 00:30:20.163233 containerd[1551]: 2025-07-10 00:30:20.128 [WARNING][6331] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0", GenerateName:"calico-apiserver-5d465fcf7d-", Namespace:"calico-apiserver", SelfLink:"", UID:"09a904e3-2a27-4aa7-afe0-ae11924a0f3d", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 0, 29, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d465fcf7d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5a779b1a4d64ce66d6b70513762f272949516c7a8e131eb0071cfaf9658fd09e", Pod:"calico-apiserver-5d465fcf7d-bpp6r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali076b5175f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 00:30:20.163233 containerd[1551]: 2025-07-10 00:30:20.128 [INFO][6331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:30:20.163233 containerd[1551]: 2025-07-10 00:30:20.128 [INFO][6331] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" iface="eth0" netns="" Jul 10 00:30:20.163233 containerd[1551]: 2025-07-10 00:30:20.128 [INFO][6331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:30:20.163233 containerd[1551]: 2025-07-10 00:30:20.128 [INFO][6331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:30:20.163233 containerd[1551]: 2025-07-10 00:30:20.149 [INFO][6340] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" HandleID="k8s-pod-network.f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:20.163233 containerd[1551]: 2025-07-10 00:30:20.149 [INFO][6340] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 00:30:20.163233 containerd[1551]: 2025-07-10 00:30:20.149 [INFO][6340] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 00:30:20.163233 containerd[1551]: 2025-07-10 00:30:20.158 [WARNING][6340] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" HandleID="k8s-pod-network.f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:20.163233 containerd[1551]: 2025-07-10 00:30:20.158 [INFO][6340] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" HandleID="k8s-pod-network.f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Workload="localhost-k8s-calico--apiserver--5d465fcf7d--bpp6r-eth0" Jul 10 00:30:20.163233 containerd[1551]: 2025-07-10 00:30:20.159 [INFO][6340] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 00:30:20.163233 containerd[1551]: 2025-07-10 00:30:20.161 [INFO][6331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7" Jul 10 00:30:20.163735 containerd[1551]: time="2025-07-10T00:30:20.163275010Z" level=info msg="TearDown network for sandbox \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\" successfully" Jul 10 00:30:20.166417 containerd[1551]: time="2025-07-10T00:30:20.166337809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 10 00:30:20.166497 containerd[1551]: time="2025-07-10T00:30:20.166443529Z" level=info msg="RemovePodSandbox \"f6b9b17f5a64cbbbf7da5b922ce0024d283f80f0e2394f76c73ea3cca7e88eb7\" returns successfully" Jul 10 00:30:21.982776 systemd[1]: Started sshd@13-10.0.0.65:22-10.0.0.1:48680.service - OpenSSH per-connection server daemon (10.0.0.1:48680). Jul 10 00:30:22.027701 sshd[6369]: Accepted publickey for core from 10.0.0.1 port 48680 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:22.029473 sshd[6369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:22.033647 systemd-logind[1524]: New session 14 of user core. Jul 10 00:30:22.046656 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 10 00:30:22.269982 sshd[6369]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:22.273396 systemd[1]: sshd@13-10.0.0.65:22-10.0.0.1:48680.service: Deactivated successfully. Jul 10 00:30:22.275373 systemd-logind[1524]: Session 14 logged out. Waiting for processes to exit. Jul 10 00:30:22.275788 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 00:30:22.277540 systemd-logind[1524]: Removed session 14. Jul 10 00:30:25.081266 kubelet[2626]: I0710 00:30:25.081208 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:30:26.189318 kubelet[2626]: I0710 00:30:26.189271 2626 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 00:30:27.281640 systemd[1]: Started sshd@14-10.0.0.65:22-10.0.0.1:45128.service - OpenSSH per-connection server daemon (10.0.0.1:45128). Jul 10 00:30:27.313501 sshd[6399]: Accepted publickey for core from 10.0.0.1 port 45128 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:27.314860 sshd[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:27.318854 systemd-logind[1524]: New session 15 of user core. 
Jul 10 00:30:27.329703 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 10 00:30:27.503074 sshd[6399]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:27.507478 systemd-logind[1524]: Session 15 logged out. Waiting for processes to exit. Jul 10 00:30:27.507741 systemd[1]: sshd@14-10.0.0.65:22-10.0.0.1:45128.service: Deactivated successfully. Jul 10 00:30:27.509648 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 00:30:27.510131 systemd-logind[1524]: Removed session 15. Jul 10 00:30:32.510585 systemd[1]: Started sshd@15-10.0.0.65:22-10.0.0.1:59068.service - OpenSSH per-connection server daemon (10.0.0.1:59068). Jul 10 00:30:32.548350 sshd[6436]: Accepted publickey for core from 10.0.0.1 port 59068 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:32.550912 sshd[6436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:32.555143 systemd-logind[1524]: New session 16 of user core. Jul 10 00:30:32.562665 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 10 00:30:32.802006 sshd[6436]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:32.809612 systemd[1]: Started sshd@16-10.0.0.65:22-10.0.0.1:59080.service - OpenSSH per-connection server daemon (10.0.0.1:59080). Jul 10 00:30:32.810017 systemd[1]: sshd@15-10.0.0.65:22-10.0.0.1:59068.service: Deactivated successfully. Jul 10 00:30:32.815954 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 00:30:32.818443 systemd-logind[1524]: Session 16 logged out. Waiting for processes to exit. Jul 10 00:30:32.819605 systemd-logind[1524]: Removed session 16. Jul 10 00:30:32.843981 sshd[6449]: Accepted publickey for core from 10.0.0.1 port 59080 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:32.848307 sshd[6449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:32.855288 systemd-logind[1524]: New session 17 of user core. Jul 10 00:30:32.861698 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 10 00:30:33.103536 sshd[6449]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:33.112807 systemd[1]: Started sshd@17-10.0.0.65:22-10.0.0.1:59088.service - OpenSSH per-connection server daemon (10.0.0.1:59088). Jul 10 00:30:33.113216 systemd[1]: sshd@16-10.0.0.65:22-10.0.0.1:59080.service: Deactivated successfully. Jul 10 00:30:33.118750 systemd-logind[1524]: Session 17 logged out. Waiting for processes to exit. Jul 10 00:30:33.118943 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 00:30:33.121143 systemd-logind[1524]: Removed session 17. Jul 10 00:30:33.154658 sshd[6462]: Accepted publickey for core from 10.0.0.1 port 59088 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:33.156050 sshd[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:33.159955 systemd-logind[1524]: New session 18 of user core. Jul 10 00:30:33.169653 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 10 00:30:34.993430 sshd[6462]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:35.005790 systemd[1]: Started sshd@18-10.0.0.65:22-10.0.0.1:59102.service - OpenSSH per-connection server daemon (10.0.0.1:59102). Jul 10 00:30:35.006230 systemd[1]: sshd@17-10.0.0.65:22-10.0.0.1:59088.service: Deactivated successfully. Jul 10 00:30:35.013071 systemd[1]: session-18.scope: Deactivated successfully. 
Jul 10 00:30:35.017050 systemd-logind[1524]: Session 18 logged out. Waiting for processes to exit. Jul 10 00:30:35.023669 systemd-logind[1524]: Removed session 18. Jul 10 00:30:35.049348 sshd[6482]: Accepted publickey for core from 10.0.0.1 port 59102 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:35.050846 sshd[6482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:35.056107 systemd-logind[1524]: New session 19 of user core. Jul 10 00:30:35.065745 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 10 00:30:35.491542 sshd[6482]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:35.498696 systemd[1]: Started sshd@19-10.0.0.65:22-10.0.0.1:59112.service - OpenSSH per-connection server daemon (10.0.0.1:59112). Jul 10 00:30:35.499119 systemd[1]: sshd@18-10.0.0.65:22-10.0.0.1:59102.service: Deactivated successfully. Jul 10 00:30:35.502868 systemd-logind[1524]: Session 19 logged out. Waiting for processes to exit. Jul 10 00:30:35.503081 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 00:30:35.507594 systemd-logind[1524]: Removed session 19. Jul 10 00:30:35.535740 sshd[6497]: Accepted publickey for core from 10.0.0.1 port 59112 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:35.537111 sshd[6497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:35.541387 systemd-logind[1524]: New session 20 of user core. Jul 10 00:30:35.550677 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 10 00:30:35.697645 sshd[6497]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:35.702573 systemd[1]: sshd@19-10.0.0.65:22-10.0.0.1:59112.service: Deactivated successfully. Jul 10 00:30:35.706669 systemd-logind[1524]: Session 20 logged out. Waiting for processes to exit. Jul 10 00:30:35.707233 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 00:30:35.709887 systemd-logind[1524]: Removed session 20. Jul 10 00:30:38.931396 kubelet[2626]: E0710 00:30:38.923685 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:40.708612 systemd[1]: Started sshd@20-10.0.0.65:22-10.0.0.1:59126.service - OpenSSH per-connection server daemon (10.0.0.1:59126). Jul 10 00:30:40.740117 sshd[6539]: Accepted publickey for core from 10.0.0.1 port 59126 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:40.741572 sshd[6539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:40.745772 systemd-logind[1524]: New session 21 of user core. Jul 10 00:30:40.752726 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 10 00:30:40.867506 sshd[6539]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:40.871744 systemd[1]: sshd@20-10.0.0.65:22-10.0.0.1:59126.service: Deactivated successfully. Jul 10 00:30:40.874052 systemd[1]: session-21.scope: Deactivated successfully. Jul 10 00:30:40.875476 systemd-logind[1524]: Session 21 logged out. Waiting for processes to exit. Jul 10 00:30:40.876531 systemd-logind[1524]: Removed session 21. Jul 10 00:30:45.882267 systemd[1]: Started sshd@21-10.0.0.65:22-10.0.0.1:57386.service - OpenSSH per-connection server daemon (10.0.0.1:57386). 
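Each "Accepted publickey" entry in the sshd sequences above (and in the one that follows) identifies the client key only by its SHA256 fingerprint: the unpadded base64 of the SHA-256 hash of the key's wire encoding. That format can be reproduced with golang.org/x/crypto/ssh; the sketch below generates a throwaway key rather than using this host's real key material.

// Recompute the "SHA256:..." fingerprint format sshd logs in its
// "Accepted publickey" entries. A throwaway key is generated here; in
// practice you would parse an authorized_keys entry with
// ssh.ParseAuthorizedKey instead.
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	// Prints the same "SHA256:<unpadded base64>" form seen above,
	// e.g. SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs.
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}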
Jul 10 00:30:45.916882 sshd[6563]: Accepted publickey for core from 10.0.0.1 port 57386 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:45.918228 sshd[6563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:45.924130 systemd-logind[1524]: New session 22 of user core. Jul 10 00:30:45.931681 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 10 00:30:46.114883 sshd[6563]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:46.118915 systemd-logind[1524]: Session 22 logged out. Waiting for processes to exit. Jul 10 00:30:46.119593 systemd[1]: sshd@21-10.0.0.65:22-10.0.0.1:57386.service: Deactivated successfully. Jul 10 00:30:46.122005 systemd[1]: session-22.scope: Deactivated successfully. Jul 10 00:30:46.123549 systemd-logind[1524]: Removed session 22. Jul 10 00:30:47.923155 kubelet[2626]: E0710 00:30:47.923101 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:49.922777 kubelet[2626]: E0710 00:30:49.922738 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 10 00:30:51.124696 systemd[1]: Started sshd@22-10.0.0.65:22-10.0.0.1:57400.service - OpenSSH per-connection server daemon (10.0.0.1:57400). Jul 10 00:30:51.168404 sshd[6601]: Accepted publickey for core from 10.0.0.1 port 57400 ssh2: RSA SHA256:6ip12YJRxsd1pENU5FRdLlGNHGMqkKNn+D5B7RGN6xs Jul 10 00:30:51.169134 sshd[6601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 10 00:30:51.173457 systemd-logind[1524]: New session 23 of user core. Jul 10 00:30:51.179687 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 10 00:30:51.305454 sshd[6601]: pam_unix(sshd:session): session closed for user core Jul 10 00:30:51.314428 systemd[1]: sshd@22-10.0.0.65:22-10.0.0.1:57400.service: Deactivated successfully. Jul 10 00:30:51.316531 systemd-logind[1524]: Session 23 logged out. Waiting for processes to exit. Jul 10 00:30:51.319224 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 00:30:51.321551 systemd-logind[1524]: Removed session 23. Jul 10 00:30:52.923027 kubelet[2626]: E0710 00:30:52.922939 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
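The kubelet dns.go warnings that close this log ("Nameserver limits were exceeded ...") stem from the libc resolver's hard limit of three nameservers per resolv.conf: kubelet keeps the first three entries and logs the applied set, which is why exactly 1.1.1.1, 1.0.0.1 and 8.8.8.8 survive here. A simplified sketch of that clamp follows; the limit of three is the real resolver constraint, while the parsing is deliberately minimal.

// Simplified version of the clamp behind kubelet's "Nameserver limits
// exceeded" warning: the libc resolver honors at most three
// nameservers, so extras are dropped and the applied set is logged.
package main

import (
	"bufio"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // resolv.conf / glibc MAXNS limit

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var nameservers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			nameservers = append(nameservers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(nameservers) > maxNameservers {
		applied := nameservers[:maxNameservers]
		log.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s",
			strings.Join(applied, " "))
	}
}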