Aug 13 00:07:17.974545 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 13 00:07:17.974565 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025
Aug 13 00:07:17.974575 kernel: KASLR enabled
Aug 13 00:07:17.974581 kernel: efi: EFI v2.7 by EDK II
Aug 13 00:07:17.974587 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Aug 13 00:07:17.974592 kernel: random: crng init done
Aug 13 00:07:17.974599 kernel: ACPI: Early table checksum verification disabled
Aug 13 00:07:17.974605 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Aug 13 00:07:17.974611 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 13 00:07:17.974619 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:07:17.974625 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:07:17.974631 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:07:17.974637 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:07:17.974643 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:07:17.974651 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:07:17.974658 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:07:17.974665 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:07:17.974671 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 00:07:17.974677 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 13 00:07:17.974684 kernel: NUMA: Failed to initialise from firmware
Aug 13 00:07:17.974690 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:07:17.974699 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Aug 13 00:07:17.974705 kernel: Zone ranges:
Aug 13 00:07:17.974711 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:07:17.974718 kernel: DMA32 empty
Aug 13 00:07:17.974725 kernel: Normal empty
Aug 13 00:07:17.974731 kernel: Movable zone start for each node
Aug 13 00:07:17.974737 kernel: Early memory node ranges
Aug 13 00:07:17.974744 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Aug 13 00:07:17.974750 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Aug 13 00:07:17.974757 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Aug 13 00:07:17.974763 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 13 00:07:17.974769 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 13 00:07:17.974775 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 13 00:07:17.974782 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 13 00:07:17.974788 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 13 00:07:17.974795 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 13 00:07:17.974802 kernel: psci: probing for conduit method from ACPI.
Aug 13 00:07:17.974809 kernel: psci: PSCIv1.1 detected in firmware.
Aug 13 00:07:17.974816 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 13 00:07:17.974825 kernel: psci: Trusted OS migration not required
Aug 13 00:07:17.974832 kernel: psci: SMC Calling Convention v1.1
Aug 13 00:07:17.974839 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 13 00:07:17.974847 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Aug 13 00:07:17.974855 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Aug 13 00:07:17.974861 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 13 00:07:17.974869 kernel: Detected PIPT I-cache on CPU0
Aug 13 00:07:17.974875 kernel: CPU features: detected: GIC system register CPU interface
Aug 13 00:07:17.974883 kernel: CPU features: detected: Hardware dirty bit management
Aug 13 00:07:17.974890 kernel: CPU features: detected: Spectre-v4
Aug 13 00:07:17.974897 kernel: CPU features: detected: Spectre-BHB
Aug 13 00:07:17.974904 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 13 00:07:17.974910 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 13 00:07:17.974919 kernel: CPU features: detected: ARM erratum 1418040
Aug 13 00:07:17.974926 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 13 00:07:17.974932 kernel: alternatives: applying boot alternatives
Aug 13 00:07:17.974940 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:07:17.974947 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 00:07:17.974954 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 00:07:17.974961 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 00:07:17.974968 kernel: Fallback order for Node 0: 0
Aug 13 00:07:17.974975 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 13 00:07:17.974981 kernel: Policy zone: DMA
Aug 13 00:07:17.974988 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 00:07:17.974996 kernel: software IO TLB: area num 4.
Aug 13 00:07:17.975003 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Aug 13 00:07:17.975010 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Aug 13 00:07:17.975017 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 13 00:07:17.975024 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 00:07:17.975031 kernel: rcu: RCU event tracing is enabled.
Aug 13 00:07:17.975038 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 13 00:07:17.975045 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 00:07:17.975052 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 00:07:17.975059 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 00:07:17.975148 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 13 00:07:17.975159 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 13 00:07:17.975166 kernel: GICv3: 256 SPIs implemented
Aug 13 00:07:17.975173 kernel: GICv3: 0 Extended SPIs implemented
Aug 13 00:07:17.975180 kernel: Root IRQ handler: gic_handle_irq
Aug 13 00:07:17.975186 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 13 00:07:17.975193 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 13 00:07:17.975200 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 13 00:07:17.975207 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Aug 13 00:07:17.975213 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Aug 13 00:07:17.975220 kernel: GICv3: using LPI property table @0x00000000400f0000
Aug 13 00:07:17.975227 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Aug 13 00:07:17.975234 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 00:07:17.975242 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:07:17.975249 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 13 00:07:17.975256 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 13 00:07:17.975263 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 13 00:07:17.975269 kernel: arm-pv: using stolen time PV
Aug 13 00:07:17.975277 kernel: Console: colour dummy device 80x25
Aug 13 00:07:17.975283 kernel: ACPI: Core revision 20230628
Aug 13 00:07:17.975291 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 13 00:07:17.975298 kernel: pid_max: default: 32768 minimum: 301
Aug 13 00:07:17.975304 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 00:07:17.975313 kernel: landlock: Up and running.
Aug 13 00:07:17.975319 kernel: SELinux: Initializing.
Aug 13 00:07:17.975326 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:07:17.975333 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 00:07:17.975340 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:07:17.975347 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 13 00:07:17.975354 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 00:07:17.975362 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 00:07:17.975369 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 13 00:07:17.975377 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 13 00:07:17.975384 kernel: Remapping and enabling EFI services.
Aug 13 00:07:17.975391 kernel: smp: Bringing up secondary CPUs ...
Aug 13 00:07:17.975397 kernel: Detected PIPT I-cache on CPU1
Aug 13 00:07:17.975404 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 13 00:07:17.975411 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Aug 13 00:07:17.975418 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:07:17.975425 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 13 00:07:17.975432 kernel: Detected PIPT I-cache on CPU2
Aug 13 00:07:17.975439 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 13 00:07:17.975462 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Aug 13 00:07:17.975469 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:07:17.975482 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 13 00:07:17.975490 kernel: Detected PIPT I-cache on CPU3
Aug 13 00:07:17.975498 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 13 00:07:17.975505 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Aug 13 00:07:17.975512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 13 00:07:17.975519 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 13 00:07:17.975527 kernel: smp: Brought up 1 node, 4 CPUs
Aug 13 00:07:17.975535 kernel: SMP: Total of 4 processors activated.
Aug 13 00:07:17.975543 kernel: CPU features: detected: 32-bit EL0 Support
Aug 13 00:07:17.975550 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 13 00:07:17.975557 kernel: CPU features: detected: Common not Private translations
Aug 13 00:07:17.975564 kernel: CPU features: detected: CRC32 instructions
Aug 13 00:07:17.975571 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 13 00:07:17.975579 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 13 00:07:17.975586 kernel: CPU features: detected: LSE atomic instructions
Aug 13 00:07:17.975595 kernel: CPU features: detected: Privileged Access Never
Aug 13 00:07:17.975602 kernel: CPU features: detected: RAS Extension Support
Aug 13 00:07:17.975609 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 13 00:07:17.975617 kernel: CPU: All CPU(s) started at EL1
Aug 13 00:07:17.975624 kernel: alternatives: applying system-wide alternatives
Aug 13 00:07:17.975631 kernel: devtmpfs: initialized
Aug 13 00:07:17.975638 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 00:07:17.975646 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 13 00:07:17.975653 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 00:07:17.975662 kernel: SMBIOS 3.0.0 present.
Aug 13 00:07:17.975669 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Aug 13 00:07:17.975676 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 00:07:17.975683 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 13 00:07:17.975691 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 13 00:07:17.975698 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 13 00:07:17.975705 kernel: audit: initializing netlink subsys (disabled)
Aug 13 00:07:17.975712 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
Aug 13 00:07:17.975721 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 00:07:17.975728 kernel: cpuidle: using governor menu
Aug 13 00:07:17.975735 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 13 00:07:17.975743 kernel: ASID allocator initialised with 32768 entries
Aug 13 00:07:17.975750 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 00:07:17.975757 kernel: Serial: AMBA PL011 UART driver
Aug 13 00:07:17.975764 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 13 00:07:17.975772 kernel: Modules: 0 pages in range for non-PLT usage
Aug 13 00:07:17.975779 kernel: Modules: 509008 pages in range for PLT usage
Aug 13 00:07:17.975786 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 00:07:17.975795 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 00:07:17.975802 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 13 00:07:17.975809 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 13 00:07:17.975816 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 00:07:17.975823 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 00:07:17.975831 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 13 00:07:17.975838 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 13 00:07:17.975845 kernel: ACPI: Added _OSI(Module Device)
Aug 13 00:07:17.975852 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 00:07:17.975861 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 00:07:17.975868 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 00:07:17.975875 kernel: ACPI: Interpreter enabled
Aug 13 00:07:17.975882 kernel: ACPI: Using GIC for interrupt routing
Aug 13 00:07:17.975890 kernel: ACPI: MCFG table detected, 1 entries
Aug 13 00:07:17.975897 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 13 00:07:17.975904 kernel: printk: console [ttyAMA0] enabled
Aug 13 00:07:17.975912 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 00:07:17.976055 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 00:07:17.976167 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 13 00:07:17.976239 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 13 00:07:17.976323 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 13 00:07:17.976390 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 13 00:07:17.976400 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 13 00:07:17.976408 kernel: PCI host bridge to bus 0000:00
Aug 13 00:07:17.976482 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 13 00:07:17.976568 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 13 00:07:17.976630 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 13 00:07:17.976689 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 00:07:17.976775 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 13 00:07:17.976852 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 13 00:07:17.976973 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 13 00:07:17.977048 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 13 00:07:17.977152 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:07:17.977229 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 13 00:07:17.977297 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 13 00:07:17.977365 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 13 00:07:17.977428 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 13 00:07:17.977489 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 13 00:07:17.977558 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 13 00:07:17.977568 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 13 00:07:17.977576 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 13 00:07:17.977583 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 13 00:07:17.977591 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 13 00:07:17.977598 kernel: iommu: Default domain type: Translated
Aug 13 00:07:17.977606 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 13 00:07:17.977613 kernel: efivars: Registered efivars operations
Aug 13 00:07:17.977623 kernel: vgaarb: loaded
Aug 13 00:07:17.977630 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 13 00:07:17.977637 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 00:07:17.977645 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 00:07:17.977652 kernel: pnp: PnP ACPI init
Aug 13 00:07:17.977726 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 13 00:07:17.977737 kernel: pnp: PnP ACPI: found 1 devices
Aug 13 00:07:17.977744 kernel: NET: Registered PF_INET protocol family
Aug 13 00:07:17.977752 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 00:07:17.977762 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 00:07:17.977769 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 00:07:17.977776 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 00:07:17.977784 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 00:07:17.977791 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 00:07:17.977799 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:07:17.977806 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 00:07:17.977814 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 00:07:17.977823 kernel: PCI: CLS 0 bytes, default 64
Aug 13 00:07:17.977831 kernel: kvm [1]: HYP mode not available
Aug 13 00:07:17.977838 kernel: Initialise system trusted keyrings
Aug 13 00:07:17.977846 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 00:07:17.977854 kernel: Key type asymmetric registered
Aug 13 00:07:17.977861 kernel: Asymmetric key parser 'x509' registered
Aug 13 00:07:17.977869 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 00:07:17.977876 kernel: io scheduler mq-deadline registered
Aug 13 00:07:17.977883 kernel: io scheduler kyber registered
Aug 13 00:07:17.977891 kernel: io scheduler bfq registered
Aug 13 00:07:17.977900 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 13 00:07:17.977908 kernel: ACPI: button: Power Button [PWRB]
Aug 13 00:07:17.977916 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 13 00:07:17.978003 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 13 00:07:17.978013 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 00:07:17.978021 kernel: thunder_xcv, ver 1.0
Aug 13 00:07:17.978028 kernel: thunder_bgx, ver 1.0
Aug 13 00:07:17.978035 kernel: nicpf, ver 1.0
Aug 13 00:07:17.978043 kernel: nicvf, ver 1.0
Aug 13 00:07:17.978142 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 13 00:07:17.978210 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-13T00:07:17 UTC (1755043637)
Aug 13 00:07:17.978220 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 00:07:17.978228 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 13 00:07:17.978236 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 13 00:07:17.978243 kernel: watchdog: Hard watchdog permanently disabled
Aug 13 00:07:17.978251 kernel: NET: Registered PF_INET6 protocol family
Aug 13 00:07:17.978258 kernel: Segment Routing with IPv6
Aug 13 00:07:17.978270 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 00:07:17.978277 kernel: NET: Registered PF_PACKET protocol family
Aug 13 00:07:17.978285 kernel: Key type dns_resolver registered
Aug 13 00:07:17.978292 kernel: registered taskstats version 1
Aug 13 00:07:17.978299 kernel: Loading compiled-in X.509 certificates
Aug 13 00:07:17.978306 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6'
Aug 13 00:07:17.978314 kernel: Key type .fscrypt registered
Aug 13 00:07:17.978321 kernel: Key type fscrypt-provisioning registered
Aug 13 00:07:17.978328 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 00:07:17.978337 kernel: ima: Allocated hash algorithm: sha1
Aug 13 00:07:17.978344 kernel: ima: No architecture policies found
Aug 13 00:07:17.978351 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 13 00:07:17.978358 kernel: clk: Disabling unused clocks
Aug 13 00:07:17.978366 kernel: Freeing unused kernel memory: 39424K
Aug 13 00:07:17.978373 kernel: Run /init as init process
Aug 13 00:07:17.978380 kernel: with arguments:
Aug 13 00:07:17.978387 kernel: /init
Aug 13 00:07:17.978394 kernel: with environment:
Aug 13 00:07:17.978403 kernel: HOME=/
Aug 13 00:07:17.978410 kernel: TERM=linux
Aug 13 00:07:17.978417 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 00:07:17.978426 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 00:07:17.978436 systemd[1]: Detected virtualization kvm.
Aug 13 00:07:17.978444 systemd[1]: Detected architecture arm64.
Aug 13 00:07:17.978452 systemd[1]: Running in initrd.
Aug 13 00:07:17.978461 systemd[1]: No hostname configured, using default hostname.
Aug 13 00:07:17.978486 systemd[1]: Hostname set to <localhost>.
Aug 13 00:07:17.978494 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 00:07:17.978502 systemd[1]: Queued start job for default target initrd.target.
Aug 13 00:07:17.978510 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 00:07:17.978518 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 00:07:17.978527 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 00:07:17.978535 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 00:07:17.978545 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 00:07:17.978553 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 00:07:17.978563 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 00:07:17.978571 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 00:07:17.978579 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 00:07:17.978587 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 00:07:17.978595 systemd[1]: Reached target paths.target - Path Units.
Aug 13 00:07:17.978605 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 00:07:17.978613 systemd[1]: Reached target swap.target - Swaps.
Aug 13 00:07:17.978621 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 00:07:17.978628 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 00:07:17.978636 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 00:07:17.978644 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 00:07:17.978652 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 00:07:17.978660 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 00:07:17.978668 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 00:07:17.978678 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 00:07:17.978686 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 00:07:17.978693 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 00:07:17.978701 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 00:07:17.978709 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 00:07:17.978717 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 00:07:17.978725 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 00:07:17.978733 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 00:07:17.978743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:07:17.978751 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 00:07:17.978760 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 00:07:17.978768 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 00:07:17.978776 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 00:07:17.978815 systemd-journald[239]: Collecting audit messages is disabled.
Aug 13 00:07:17.978835 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:07:17.978844 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:07:17.978853 systemd-journald[239]: Journal started
Aug 13 00:07:17.978874 systemd-journald[239]: Runtime Journal (/run/log/journal/8a4e80e1a456481392c9d3178d21d308) is 5.9M, max 47.3M, 41.4M free.
Aug 13 00:07:17.969142 systemd-modules-load[240]: Inserted module 'overlay'
Aug 13 00:07:17.983098 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 00:07:17.983131 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 00:07:17.987593 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 00:07:17.992225 kernel: Bridge firewalling registered
Aug 13 00:07:17.987973 systemd-modules-load[240]: Inserted module 'br_netfilter'
Aug 13 00:07:17.989434 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 00:07:17.993618 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 00:07:17.996328 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 00:07:17.999057 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 00:07:18.003202 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:07:18.008481 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 00:07:18.010635 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 00:07:18.011998 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 00:07:18.015403 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 00:07:18.018709 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 00:07:18.022599 dracut-cmdline[274]: dracut-dracut-053
Aug 13 00:07:18.025876 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 13 00:07:18.048675 systemd-resolved[282]: Positive Trust Anchors:
Aug 13 00:07:18.048695 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 00:07:18.048726 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 00:07:18.053385 systemd-resolved[282]: Defaulting to hostname 'linux'.
Aug 13 00:07:18.054873 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 00:07:18.058467 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:07:18.104103 kernel: SCSI subsystem initialized
Aug 13 00:07:18.109097 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 00:07:18.117107 kernel: iscsi: registered transport (tcp)
Aug 13 00:07:18.132104 kernel: iscsi: registered transport (qla4xxx)
Aug 13 00:07:18.132146 kernel: QLogic iSCSI HBA Driver
Aug 13 00:07:18.178570 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 00:07:18.186250 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 00:07:18.205586 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 00:07:18.205647 kernel: device-mapper: uevent: version 1.0.3
Aug 13 00:07:18.206798 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 00:07:18.259113 kernel: raid6: neonx8 gen() 15783 MB/s
Aug 13 00:07:18.276115 kernel: raid6: neonx4 gen() 15653 MB/s
Aug 13 00:07:18.293107 kernel: raid6: neonx2 gen() 13243 MB/s
Aug 13 00:07:18.310104 kernel: raid6: neonx1 gen() 10491 MB/s
Aug 13 00:07:18.327119 kernel: raid6: int64x8 gen() 6952 MB/s
Aug 13 00:07:18.344093 kernel: raid6: int64x4 gen() 7341 MB/s
Aug 13 00:07:18.361126 kernel: raid6: int64x2 gen() 6127 MB/s
Aug 13 00:07:18.378286 kernel: raid6: int64x1 gen() 5058 MB/s
Aug 13 00:07:18.378366 kernel: raid6: using algorithm neonx8 gen() 15783 MB/s
Aug 13 00:07:18.397259 kernel: raid6: .... xor() 12658 MB/s, rmw enabled
Aug 13 00:07:18.397328 kernel: raid6: using neon recovery algorithm
Aug 13 00:07:18.412156 kernel: xor: measuring software checksum speed
Aug 13 00:07:18.413585 kernel: 8regs : 413 MB/sec
Aug 13 00:07:18.413618 kernel: 32regs : 19613 MB/sec
Aug 13 00:07:18.414218 kernel: arm64_neon : 26883 MB/sec
Aug 13 00:07:18.414249 kernel: xor: using function: arm64_neon (26883 MB/sec)
Aug 13 00:07:18.485107 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 00:07:18.507272 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 00:07:18.523326 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 00:07:18.539944 systemd-udevd[460]: Using default interface naming scheme 'v255'.
Aug 13 00:07:18.546766 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 00:07:18.562280 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 00:07:18.585087 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Aug 13 00:07:18.630128 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 00:07:18.640260 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 00:07:18.699279 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 00:07:18.711632 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 00:07:18.743900 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 00:07:18.747605 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 00:07:18.749022 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:07:18.751302 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 00:07:18.760360 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 00:07:18.766728 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 13 00:07:18.767366 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 13 00:07:18.774391 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 00:07:18.774429 kernel: GPT:9289727 != 19775487
Aug 13 00:07:18.774440 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 00:07:18.774450 kernel: GPT:9289727 != 19775487
Aug 13 00:07:18.775480 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 00:07:18.775493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:07:18.781601 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 00:07:18.781716 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:07:18.784785 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:07:18.785889 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 00:07:18.786031 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:07:18.788250 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:07:18.802417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 00:07:18.804526 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 00:07:18.816361 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (507)
Aug 13 00:07:18.816417 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (522)
Aug 13 00:07:18.823612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 00:07:18.830033 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 00:07:18.838963 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 00:07:18.843883 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 00:07:18.848296 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 00:07:18.849626 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 00:07:18.863260 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 00:07:18.865521 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 00:07:18.871320 disk-uuid[551]: Primary Header is updated.
Aug 13 00:07:18.871320 disk-uuid[551]: Secondary Entries is updated.
Aug 13 00:07:18.871320 disk-uuid[551]: Secondary Header is updated.
Aug 13 00:07:18.875101 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:07:18.921329 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 00:07:19.938101 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 00:07:19.938320 disk-uuid[552]: The operation has completed successfully.
Aug 13 00:07:19.971281 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 00:07:19.971378 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 00:07:19.983262 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 00:07:19.987251 sh[570]: Success
Aug 13 00:07:20.007110 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 13 00:07:20.065645 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 00:07:20.067915 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 00:07:20.069846 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 00:07:20.080344 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982
Aug 13 00:07:20.080410 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:07:20.080427 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 00:07:20.081523 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 00:07:20.083435 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 00:07:20.088454 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 00:07:20.089551 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 00:07:20.105241 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 00:07:20.107018 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 00:07:20.115203 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:07:20.115243 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:07:20.115254 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:07:20.121557 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 00:07:20.134592 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 00:07:20.136481 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:07:20.146224 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 00:07:20.160305 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 00:07:20.213755 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 00:07:20.221298 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 00:07:20.248286 systemd-networkd[753]: lo: Link UP
Aug 13 00:07:20.248298 systemd-networkd[753]: lo: Gained carrier
Aug 13 00:07:20.249285 systemd-networkd[753]: Enumeration completed
Aug 13 00:07:20.249714 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 00:07:20.249805 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:07:20.249808 systemd-networkd[753]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 00:07:20.251235 systemd[1]: Reached target network.target - Network.
Aug 13 00:07:20.251557 systemd-networkd[753]: eth0: Link UP
Aug 13 00:07:20.251560 systemd-networkd[753]: eth0: Gained carrier
Aug 13 00:07:20.251567 systemd-networkd[753]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 00:07:20.278157 systemd-networkd[753]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 13 00:07:20.286145 ignition[668]: Ignition 2.19.0
Aug 13 00:07:20.286157 ignition[668]: Stage: fetch-offline
Aug 13 00:07:20.286198 ignition[668]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:07:20.286207 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:07:20.286504 ignition[668]: parsed url from cmdline: ""
Aug 13 00:07:20.286507 ignition[668]: no config URL provided
Aug 13 00:07:20.286512 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 00:07:20.286519 ignition[668]: no config at "/usr/lib/ignition/user.ign"
Aug 13 00:07:20.286544 ignition[668]: op(1): [started] loading QEMU firmware config module
Aug 13 00:07:20.286549 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 13 00:07:20.314669 ignition[668]: op(1): [finished] loading QEMU firmware config module
Aug 13 00:07:20.360985 ignition[668]: parsing config with SHA512: f93407b51c8302877160528eca4de3bbaa36f49181adcaf08bc4a94f7289fc752a222014a00af4122bb6a770aeeb56b51bc6778196e5c91df5bf4e5b217ad5d6
Aug 13 00:07:20.367930 unknown[668]: fetched base config from "system"
Aug 13 00:07:20.367946 unknown[668]: fetched user config from "qemu"
Aug 13 00:07:20.368666 ignition[668]: fetch-offline: fetch-offline passed
Aug 13 00:07:20.368840 ignition[668]: Ignition finished successfully
Aug 13 00:07:20.371762 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 00:07:20.374446 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 13 00:07:20.386256 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 00:07:20.398677 ignition[767]: Ignition 2.19.0
Aug 13 00:07:20.398688 ignition[767]: Stage: kargs
Aug 13 00:07:20.398882 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:07:20.398891 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:07:20.399876 ignition[767]: kargs: kargs passed
Aug 13 00:07:20.399926 ignition[767]: Ignition finished successfully
Aug 13 00:07:20.403357 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 00:07:20.422315 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 00:07:20.441683 ignition[775]: Ignition 2.19.0
Aug 13 00:07:20.441699 ignition[775]: Stage: disks
Aug 13 00:07:20.442000 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Aug 13 00:07:20.442013 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:07:20.444262 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 00:07:20.443187 ignition[775]: disks: disks passed
Aug 13 00:07:20.446941 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 00:07:20.443240 ignition[775]: Ignition finished successfully
Aug 13 00:07:20.448457 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 00:07:20.450255 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 00:07:20.452285 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 00:07:20.454101 systemd[1]: Reached target basic.target - Basic System.
Aug 13 00:07:20.470312 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 00:07:20.485312 systemd-fsck[784]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 13 00:07:20.490198 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 00:07:20.502289 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 00:07:20.557145 kernel: EXT4-fs (vda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none.
Aug 13 00:07:20.557444 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 00:07:20.558874 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 00:07:20.576224 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:07:20.578842 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 00:07:20.579924 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 00:07:20.579971 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 00:07:20.579995 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 00:07:20.586809 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 00:07:20.589372 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 00:07:20.597474 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (792)
Aug 13 00:07:20.597499 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:07:20.597509 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:07:20.597519 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:07:20.597555 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 00:07:20.599415 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:07:20.652646 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 00:07:20.657148 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Aug 13 00:07:20.661204 initrd-setup-root[830]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 00:07:20.665442 initrd-setup-root[837]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 00:07:20.774634 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 00:07:20.789701 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 00:07:20.792513 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 00:07:20.799097 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:07:20.818685 ignition[905]: INFO : Ignition 2.19.0
Aug 13 00:07:20.819890 ignition[905]: INFO : Stage: mount
Aug 13 00:07:20.819890 ignition[905]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:07:20.819890 ignition[905]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:07:20.823804 ignition[905]: INFO : mount: mount passed
Aug 13 00:07:20.823804 ignition[905]: INFO : Ignition finished successfully
Aug 13 00:07:20.821778 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 00:07:20.825234 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 00:07:20.833274 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 00:07:21.078879 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 00:07:21.095406 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 00:07:21.102094 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (919)
Aug 13 00:07:21.104307 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 13 00:07:21.104332 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 13 00:07:21.105087 kernel: BTRFS info (device vda6): using free space tree
Aug 13 00:07:21.108099 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 00:07:21.108833 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 00:07:21.133638 ignition[936]: INFO : Ignition 2.19.0
Aug 13 00:07:21.133638 ignition[936]: INFO : Stage: files
Aug 13 00:07:21.135564 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 00:07:21.135564 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 13 00:07:21.135564 ignition[936]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 00:07:21.139040 ignition[936]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 00:07:21.139040 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 00:07:21.142690 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 00:07:21.144040 ignition[936]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 00:07:21.144040 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 00:07:21.143283 unknown[936]: wrote ssh authorized keys file for user: core
Aug 13 00:07:21.148168 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 00:07:21.148168 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Aug 13 00:07:21.148168 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 13 00:07:21.148168 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Aug 13 00:07:21.209636 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 13 00:07:21.932385 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 13 00:07:21.932385 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 00:07:21.938739 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 00:07:21.938739 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:07:21.938739 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 00:07:21.938739 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:07:21.938739 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 00:07:21.938739 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:07:21.938739 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 00:07:21.938739 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:07:21.938739 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 00:07:21.938739 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:07:21.938739 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:07:21.938739 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:07:21.938739 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Aug 13 00:07:21.963182 systemd-networkd[753]: eth0: Gained IPv6LL
Aug 13 00:07:22.345462 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Aug 13 00:07:22.799776 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 13 00:07:22.799776 ignition[936]: INFO : files: op(c): [started] processing unit "containerd.service"
Aug 13 00:07:22.804892 ignition[936]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 00:07:22.804892 ignition[936]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Aug 13 00:07:22.804892 ignition[936]: INFO : files: op(c): [finished] processing unit "containerd.service"
Aug 13 00:07:22.804892 ignition[936]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Aug 13 00:07:22.804892 ignition[936]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:07:22.804892 ignition[936]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 00:07:22.804892 ignition[936]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Aug 13 00:07:22.804892 ignition[936]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Aug 13 00:07:22.804892 ignition[936]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:07:22.804892 ignition[936]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 13 00:07:22.804892 ignition[936]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Aug 13 00:07:22.804892 ignition[936]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:07:22.848548 ignition[936]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:07:22.854143 ignition[936]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 13 00:07:22.856039 ignition[936]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 13 00:07:22.856039 ignition[936]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 00:07:22.856039 ignition[936]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 00:07:22.856039 ignition[936]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:07:22.856039 ignition[936]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 00:07:22.856039 ignition[936]: INFO : files: files passed
Aug 13 00:07:22.856039 ignition[936]: INFO : Ignition finished successfully
Aug 13 00:07:22.858190 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 00:07:22.871713 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 00:07:22.878012 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 00:07:22.881099 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 00:07:22.881205 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 00:07:22.891141 initrd-setup-root-after-ignition[965]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 13 00:07:22.900052 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:07:22.900052 initrd-setup-root-after-ignition[967]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:07:22.903739 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 00:07:22.904349 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 00:07:22.908877 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 00:07:22.924259 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 00:07:22.957406 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 00:07:22.957547 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 00:07:22.961291 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 00:07:22.963244 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 00:07:22.966023 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 00:07:22.967359 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 00:07:22.990126 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:07:23.002404 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 00:07:23.016448 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 00:07:23.017842 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 00:07:23.021218 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 00:07:23.023234 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 00:07:23.023383 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 00:07:23.026123 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 00:07:23.028331 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 00:07:23.030128 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 00:07:23.032121 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 00:07:23.035685 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 00:07:23.039552 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 00:07:23.041536 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 00:07:23.043595 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 00:07:23.046482 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 00:07:23.048675 systemd[1]: Stopped target swap.target - Swaps. Aug 13 00:07:23.051118 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 00:07:23.051272 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 00:07:23.058412 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:07:23.061987 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:07:23.064232 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 00:07:23.064477 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:07:23.066511 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 00:07:23.066666 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 00:07:23.069730 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 00:07:23.069883 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 00:07:23.072027 systemd[1]: Stopped target paths.target - Path Units. Aug 13 00:07:23.073834 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 00:07:23.078162 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:07:23.081091 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 00:07:23.082241 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 00:07:23.083992 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 00:07:23.084150 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 00:07:23.085894 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 00:07:23.086021 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 00:07:23.087717 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 00:07:23.087854 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 00:07:23.089810 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 00:07:23.089933 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 00:07:23.102526 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 00:07:23.104623 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 00:07:23.105636 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 00:07:23.105798 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:07:23.108119 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 00:07:23.108262 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 00:07:23.116651 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 00:07:23.116768 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Aug 13 00:07:23.131500 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 00:07:23.135655 ignition[991]: INFO : Ignition 2.19.0 Aug 13 00:07:23.135655 ignition[991]: INFO : Stage: umount Aug 13 00:07:23.139348 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 00:07:23.139348 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 13 00:07:23.139348 ignition[991]: INFO : umount: umount passed Aug 13 00:07:23.139348 ignition[991]: INFO : Ignition finished successfully Aug 13 00:07:23.141474 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 00:07:23.141631 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 00:07:23.143351 systemd[1]: Stopped target network.target - Network. Aug 13 00:07:23.144530 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 00:07:23.144637 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 00:07:23.149109 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 00:07:23.149185 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 00:07:23.151240 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 00:07:23.151305 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 00:07:23.152540 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 00:07:23.152599 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 00:07:23.156322 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 00:07:23.157876 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 00:07:23.160360 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 00:07:23.160479 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 00:07:23.163042 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 00:07:23.163289 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 00:07:23.167275 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 00:07:23.167437 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 00:07:23.169855 systemd-networkd[753]: eth0: DHCPv6 lease lost Aug 13 00:07:23.170733 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 00:07:23.170826 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:07:23.174331 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 00:07:23.174529 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 00:07:23.180435 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 00:07:23.180710 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:07:23.189246 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 00:07:23.190666 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 00:07:23.190764 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 00:07:23.193492 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 00:07:23.193558 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:07:23.196285 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 00:07:23.196349 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
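[Note: the umount stage above reports no base configs. Ignition merges distro defaults from /usr/lib/ignition/base.d and platform overrides from base.platform.d/<platform> ahead of the user-supplied config, so here only the provided config applied. For reference, a minimal user config is just a version stanza; this sketch assumes spec 3.4.0, one of the versions an Ignition 2.19 binary accepts:

    {
      "ignition": { "version": "3.4.0" }
    }
]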
Aug 13 00:07:23.198360 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 00:07:23.228572 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 00:07:23.228830 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:07:23.237010 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 00:07:23.237283 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 00:07:23.240587 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 00:07:23.240990 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 00:07:23.243120 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 00:07:23.243205 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:07:23.246628 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 00:07:23.246696 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 00:07:23.249707 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 00:07:23.249770 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 00:07:23.252435 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 00:07:23.252531 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 00:07:23.269387 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 00:07:23.270567 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 00:07:23.270648 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:07:23.274025 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 00:07:23.274129 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:07:23.275954 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 00:07:23.276015 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:07:23.278333 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 00:07:23.278398 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:07:23.281149 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 00:07:23.281243 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 00:07:23.285795 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 00:07:23.289339 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 00:07:23.304444 systemd[1]: Switching root. Aug 13 00:07:23.338495 systemd-journald[239]: Journal stopped Aug 13 00:07:24.298768 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
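[Note: "Switching root" is the hand-off from the initramfs to the real root filesystem: PID 1 serializes its state, moves /sysroot to /, and re-executes, which is why journald[239] receives SIGTERM here and a fresh journald instance appears after the switch. Upstream initrd-switch-root.service performs roughly the following (paraphrased from the upstream unit, not quoted from this host):

    systemctl --no-block switch-root /sysroot
]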
Aug 13 00:07:24.298827 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 00:07:24.298851 kernel: SELinux: policy capability open_perms=1 Aug 13 00:07:24.298865 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 00:07:24.298875 kernel: SELinux: policy capability always_check_network=0 Aug 13 00:07:24.298885 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 00:07:24.298896 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 00:07:24.298905 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 00:07:24.298915 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 00:07:24.298925 kernel: audit: type=1403 audit(1755043643.617:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 00:07:24.298936 systemd[1]: Successfully loaded SELinux policy in 40.962ms. Aug 13 00:07:24.298950 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.342ms. Aug 13 00:07:24.298964 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 00:07:24.298975 systemd[1]: Detected virtualization kvm. Aug 13 00:07:24.298985 systemd[1]: Detected architecture arm64. Aug 13 00:07:24.298998 systemd[1]: Detected first boot. Aug 13 00:07:24.299008 systemd[1]: Initializing machine ID from VM UUID. Aug 13 00:07:24.299019 zram_generator::config[1057]: No configuration found. Aug 13 00:07:24.299030 systemd[1]: Populated /etc with preset unit settings. Aug 13 00:07:24.299041 systemd[1]: Queued start job for default target multi-user.target. Aug 13 00:07:24.299108 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 00:07:24.299125 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 00:07:24.299136 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 00:07:24.299166 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 00:07:24.299194 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 00:07:24.299205 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 00:07:24.299216 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 00:07:24.299290 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 00:07:24.299305 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 00:07:24.299320 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 00:07:24.299343 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 00:07:24.299356 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 00:07:24.299367 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 00:07:24.299377 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 00:07:24.299392 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
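[Note: "zram_generator::config[1057]: No configuration found" means the zram generator ran but created no compressed-RAM devices. Had one been wanted, a config in this shape would define it; the values shown are the generator's documented defaults, given purely as an illustration:

    # /etc/systemd/zram-generator.conf  (hypothetical)
    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd
]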
Aug 13 00:07:24.299403 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Aug 13 00:07:24.299433 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 00:07:24.299445 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 00:07:24.299459 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 00:07:24.299470 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 00:07:24.299481 systemd[1]: Reached target slices.target - Slice Units. Aug 13 00:07:24.299491 systemd[1]: Reached target swap.target - Swaps. Aug 13 00:07:24.299502 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 00:07:24.299512 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 00:07:24.299523 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 00:07:24.299535 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 00:07:24.299548 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 00:07:24.299559 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 00:07:24.299569 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 00:07:24.299589 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 00:07:24.299600 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 00:07:24.299611 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 00:07:24.299626 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 00:07:24.299644 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 00:07:24.299670 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 00:07:24.299683 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 00:07:24.299694 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 00:07:24.299707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:07:24.299718 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 00:07:24.299729 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 00:07:24.299741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:07:24.299751 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:07:24.299762 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:07:24.299772 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 00:07:24.299785 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:07:24.299796 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 00:07:24.299806 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 00:07:24.299817 kernel: fuse: init (API version 7.39) Aug 13 00:07:24.299835 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) 
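[Note: the cluster of modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units being started above are instances of a single template unit; systemd expands the instance name into the module to load. The upstream template is essentially the following (simplified sketch; exact flags can vary by systemd version):

    # modprobe@.service  (paraphrased)
    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %I
]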
Aug 13 00:07:24.299846 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 00:07:24.299857 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 00:07:24.299868 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 00:07:24.299881 kernel: loop: module loaded Aug 13 00:07:24.299890 kernel: ACPI: bus type drm_connector registered Aug 13 00:07:24.299908 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 00:07:24.299925 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 00:07:24.299937 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 00:07:24.299973 systemd-journald[1136]: Collecting audit messages is disabled. Aug 13 00:07:24.299998 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 00:07:24.300010 systemd-journald[1136]: Journal started Aug 13 00:07:24.300036 systemd-journald[1136]: Runtime Journal (/run/log/journal/8a4e80e1a456481392c9d3178d21d308) is 5.9M, max 47.3M, 41.4M free. Aug 13 00:07:24.302850 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 00:07:24.304735 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 00:07:24.306142 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 00:07:24.307500 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 00:07:24.308882 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 00:07:24.310378 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 00:07:24.312214 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 00:07:24.313860 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 00:07:24.314056 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 00:07:24.315587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:07:24.315761 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:07:24.317592 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:07:24.317767 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:07:24.319218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:07:24.319386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:07:24.321024 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 00:07:24.321482 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 00:07:24.322907 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:07:24.323178 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:07:24.325177 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 00:07:24.327811 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 00:07:24.329801 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 00:07:24.344623 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 00:07:24.355206 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 00:07:24.357663 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
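[Note: the journald line above shows the volatile journal in /run capped at 47.3M; by default journald derives these caps from filesystem size. Explicit limits would be set with RuntimeMaxUse= (volatile) and SystemMaxUse= (persistent); the drop-in below is illustrative, echoing the limits logged on this host rather than configuring anything:

    # /etc/systemd/journald.conf.d/size.conf  (hypothetical)
    [Journal]
    RuntimeMaxUse=48M
    SystemMaxUse=196M
]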
Aug 13 00:07:24.359206 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 00:07:24.361270 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 00:07:24.364059 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 00:07:24.365489 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:07:24.369361 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 00:07:24.370621 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:07:24.373316 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 00:07:24.374426 systemd-journald[1136]: Time spent on flushing to /var/log/journal/8a4e80e1a456481392c9d3178d21d308 is 14.565ms for 843 entries. Aug 13 00:07:24.374426 systemd-journald[1136]: System Journal (/var/log/journal/8a4e80e1a456481392c9d3178d21d308) is 8.0M, max 195.6M, 187.6M free. Aug 13 00:07:24.396664 systemd-journald[1136]: Received client request to flush runtime journal. Aug 13 00:07:24.377279 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 00:07:24.383024 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 00:07:24.387022 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 00:07:24.395189 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 00:07:24.397005 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 00:07:24.401026 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 00:07:24.403218 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 00:07:24.405942 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 00:07:24.415376 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 00:07:24.418038 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Aug 13 00:07:24.418061 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Aug 13 00:07:24.424824 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 00:07:24.429252 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 00:07:24.430775 udevadm[1202]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 13 00:07:24.455633 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 00:07:24.466290 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 00:07:24.479458 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Aug 13 00:07:24.479475 systemd-tmpfiles[1209]: ACLs are not supported, ignoring. Aug 13 00:07:24.483996 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 00:07:24.836520 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 00:07:24.849273 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
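[Note: "ACLs are not supported, ignoring" is consistent with the systemd feature string earlier in this log, which lists -ACL: this build cannot apply tmpfiles.d 'a' (ACL) lines, so it skips them. For context, tmpfiles.d entries take the form Type Path Mode User Group Age Argument; the 'a' line below is an illustrative ACL entry of the kind being skipped, not a file from this host:

    d /var/log/journal 2755 root systemd-journal - -
    a /var/log/journal -    -    -               - d:group:adm:r-x
]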
Aug 13 00:07:24.869925 systemd-udevd[1215]: Using default interface naming scheme 'v255'. Aug 13 00:07:24.884755 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 00:07:24.900316 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 00:07:24.905124 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 00:07:24.919175 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Aug 13 00:07:24.942137 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1221) Aug 13 00:07:24.970673 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 00:07:24.982659 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 00:07:25.052378 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 00:07:25.061992 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 00:07:25.065961 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 00:07:25.073340 systemd-networkd[1223]: lo: Link UP Aug 13 00:07:25.073345 systemd-networkd[1223]: lo: Gained carrier Aug 13 00:07:25.074385 systemd-networkd[1223]: Enumeration completed Aug 13 00:07:25.074522 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 00:07:25.077513 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 00:07:25.079108 systemd-networkd[1223]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:07:25.079114 systemd-networkd[1223]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 00:07:25.079927 systemd-networkd[1223]: eth0: Link UP Aug 13 00:07:25.079931 systemd-networkd[1223]: eth0: Gained carrier Aug 13 00:07:25.079945 systemd-networkd[1223]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 00:07:25.092607 lvm[1252]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:07:25.098179 systemd-networkd[1223]: eth0: DHCPv4 address 10.0.0.72/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 13 00:07:25.104500 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 00:07:25.124573 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 00:07:25.126234 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 00:07:25.137424 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 00:07:25.142839 lvm[1261]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 00:07:25.174676 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 00:07:25.176225 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 00:07:25.177461 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 00:07:25.177496 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 00:07:25.178515 systemd[1]: Reached target machines.target - Containers. 
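[Note: eth0 matched /usr/lib/systemd/network/zz-default.network, Flatcar's catch-all DHCP policy; the "potentially unpredictable interface name" warning flags that wildcard name matches are not stable identifiers. Its effective content is along these lines (a hedged sketch, not the file verbatim):

    [Match]
    Name=*

    [Network]
    DHCP=yes
]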
Aug 13 00:07:25.180676 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 00:07:25.194281 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 00:07:25.196966 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 00:07:25.198241 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:07:25.199309 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 00:07:25.202277 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 00:07:25.205155 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 00:07:25.207500 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 00:07:25.220187 kernel: loop0: detected capacity change from 0 to 203944 Aug 13 00:07:25.219276 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 00:07:25.231257 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 00:07:25.232139 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 00:07:25.237098 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 00:07:25.272509 kernel: loop1: detected capacity change from 0 to 114432 Aug 13 00:07:25.322118 kernel: loop2: detected capacity change from 0 to 114328 Aug 13 00:07:25.372105 kernel: loop3: detected capacity change from 0 to 203944 Aug 13 00:07:25.385228 kernel: loop4: detected capacity change from 0 to 114432 Aug 13 00:07:25.395128 kernel: loop5: detected capacity change from 0 to 114328 Aug 13 00:07:25.400722 (sd-merge)[1285]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 13 00:07:25.401248 (sd-merge)[1285]: Merged extensions into '/usr'. Aug 13 00:07:25.413297 systemd[1]: Reloading requested from client PID 1269 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 00:07:25.413315 systemd[1]: Reloading... Aug 13 00:07:25.467119 zram_generator::config[1319]: No configuration found. Aug 13 00:07:25.514651 ldconfig[1265]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 00:07:25.565046 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:07:25.613227 systemd[1]: Reloading finished in 199 ms. Aug 13 00:07:25.626984 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 00:07:25.628790 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 00:07:25.652320 systemd[1]: Starting ensure-sysext.service... Aug 13 00:07:25.654426 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 00:07:25.659881 systemd[1]: Reloading requested from client PID 1357 ('systemctl') (unit ensure-sysext.service)... Aug 13 00:07:25.659897 systemd[1]: Reloading... Aug 13 00:07:25.672362 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
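[Note: the (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is why loop devices appear and why systemd then reloads its unit set. Extensions are picked up from /etc/extensions, /run/extensions and /var/lib/extensions; the kubernetes.raw symlink Ignition wrote earlier lands in the first of these. On a running host the merge can be inspected with:

    systemd-sysext status
]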
Aug 13 00:07:25.672630 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 00:07:25.673290 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 00:07:25.673505 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Aug 13 00:07:25.673557 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Aug 13 00:07:25.676415 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:07:25.676430 systemd-tmpfiles[1358]: Skipping /boot Aug 13 00:07:25.683786 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 00:07:25.683807 systemd-tmpfiles[1358]: Skipping /boot Aug 13 00:07:25.717157 zram_generator::config[1387]: No configuration found. Aug 13 00:07:25.818562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:07:25.869343 systemd[1]: Reloading finished in 208 ms. Aug 13 00:07:25.883712 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 00:07:25.904496 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:07:25.910306 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 00:07:25.913470 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 00:07:25.919268 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 00:07:25.924317 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 00:07:25.932694 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:07:25.935545 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:07:25.939391 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:07:25.944187 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:07:25.947191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:07:25.947934 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:07:25.949236 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:07:25.954385 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:07:25.954598 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:07:25.956728 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:07:25.957968 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:07:25.960712 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 00:07:25.971956 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:07:25.981479 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:07:25.989479 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:07:25.992911 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Aug 13 00:07:25.994462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:07:25.996333 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 00:07:26.001638 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 00:07:26.003434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:07:26.003614 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:07:26.006590 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:07:26.007361 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 00:07:26.010293 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:07:26.010527 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:07:26.018971 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 00:07:26.019913 systemd-resolved[1434]: Positive Trust Anchors: Aug 13 00:07:26.019947 systemd-resolved[1434]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 00:07:26.019980 systemd-resolved[1434]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 00:07:26.029699 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 00:07:26.033557 systemd-resolved[1434]: Defaulting to hostname 'linux'. Aug 13 00:07:26.034863 augenrules[1474]: No rules Aug 13 00:07:26.040745 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 00:07:26.043787 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 00:07:26.047761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 00:07:26.050374 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 00:07:26.051664 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 00:07:26.052412 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 00:07:26.054433 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:07:26.056255 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 00:07:26.058001 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 00:07:26.058208 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 00:07:26.059880 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 00:07:26.060067 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 00:07:26.061999 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 00:07:26.062215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
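[Note: the positive trust anchor logged above is the DNS root zone's KSK-2017 DS record (key tag 20326), which systemd-resolved compiles in for DNSSEC validation; the negative anchors exempt private and special-use domains from validation. The same anchor could be supplied externally in dnssec-trust-anchors.d format, shown here purely as an illustration:

    # /etc/dnssec-trust-anchors.d/root.positive  (hypothetical)
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
]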
Aug 13 00:07:26.063994 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 00:07:26.064260 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 00:07:26.068276 systemd[1]: Finished ensure-sysext.service. Aug 13 00:07:26.073808 systemd[1]: Reached target network.target - Network. Aug 13 00:07:26.074820 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 00:07:26.076331 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 00:07:26.076418 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 00:07:26.082333 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 00:07:26.083462 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 00:07:26.133573 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 00:07:26.134317 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 13 00:07:26.134370 systemd-timesyncd[1501]: Initial clock synchronization to Wed 2025-08-13 00:07:25.745714 UTC. Aug 13 00:07:26.135247 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 00:07:26.136504 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 00:07:26.137773 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 00:07:26.139025 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 00:07:26.140303 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 00:07:26.140343 systemd[1]: Reached target paths.target - Path Units. Aug 13 00:07:26.141249 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 00:07:26.142473 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 00:07:26.143640 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 00:07:26.144845 systemd[1]: Reached target timers.target - Timer Units. Aug 13 00:07:26.146595 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 00:07:26.149487 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 00:07:26.151723 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 00:07:26.164322 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 00:07:26.165562 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 00:07:26.166596 systemd[1]: Reached target basic.target - Basic System. Aug 13 00:07:26.167751 systemd[1]: System is tainted: cgroupsv1 Aug 13 00:07:26.167802 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:07:26.167827 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 00:07:26.169300 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 00:07:26.171714 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
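[Note: systemd-timesyncd synchronized against 10.0.0.1:123, a server learned from the environment (on this QEMU/KVM guest, plausibly via DHCP, since 10.0.0.1 is also the DHCP gateway) rather than from static configuration. Pinning a server statically would look like this; the drop-in path is an assumption:

    # /etc/systemd/timesyncd.conf.d/10-local.conf  (hypothetical)
    [Time]
    NTP=10.0.0.1
]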
Aug 13 00:07:26.173979 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 00:07:26.177282 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 00:07:26.179204 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 00:07:26.180635 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 00:07:26.185277 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 00:07:26.189378 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 00:07:26.194427 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 00:07:26.204942 jq[1507]: false Aug 13 00:07:26.210407 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 00:07:26.220880 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 00:07:26.222200 extend-filesystems[1509]: Found loop3 Aug 13 00:07:26.222200 extend-filesystems[1509]: Found loop4 Aug 13 00:07:26.222200 extend-filesystems[1509]: Found loop5 Aug 13 00:07:26.226423 extend-filesystems[1509]: Found vda Aug 13 00:07:26.226423 extend-filesystems[1509]: Found vda1 Aug 13 00:07:26.226423 extend-filesystems[1509]: Found vda2 Aug 13 00:07:26.226423 extend-filesystems[1509]: Found vda3 Aug 13 00:07:26.226423 extend-filesystems[1509]: Found usr Aug 13 00:07:26.226423 extend-filesystems[1509]: Found vda4 Aug 13 00:07:26.226423 extend-filesystems[1509]: Found vda6 Aug 13 00:07:26.226423 extend-filesystems[1509]: Found vda7 Aug 13 00:07:26.226423 extend-filesystems[1509]: Found vda9 Aug 13 00:07:26.226423 extend-filesystems[1509]: Checking size of /dev/vda9 Aug 13 00:07:26.223027 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 00:07:26.233192 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 00:07:26.242136 jq[1529]: true Aug 13 00:07:26.243281 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 00:07:26.243528 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 00:07:26.243828 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 00:07:26.244023 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 00:07:26.248473 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 00:07:26.248727 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 00:07:26.256121 dbus-daemon[1506]: [system] SELinux support is enabled Aug 13 00:07:26.260491 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Aug 13 00:07:26.264536 extend-filesystems[1509]: Resized partition /dev/vda9 Aug 13 00:07:26.268090 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1218) Aug 13 00:07:26.277221 extend-filesystems[1539]: resize2fs 1.47.1 (20-May-2024) Aug 13 00:07:26.279547 jq[1537]: true Aug 13 00:07:26.283997 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 13 00:07:26.290510 (ntainerd)[1541]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 00:07:26.295161 tar[1534]: linux-arm64/helm Aug 13 00:07:26.304156 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 00:07:26.304189 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 00:07:26.308299 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 00:07:26.308334 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 00:07:26.324184 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 13 00:07:26.347390 update_engine[1524]: I20250813 00:07:26.331015 1524 main.cc:92] Flatcar Update Engine starting Aug 13 00:07:26.347390 update_engine[1524]: I20250813 00:07:26.339085 1524 update_check_scheduler.cc:74] Next update check in 10m16s Aug 13 00:07:26.339045 systemd[1]: Started update-engine.service - Update Engine. Aug 13 00:07:26.353609 extend-filesystems[1539]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 00:07:26.353609 extend-filesystems[1539]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 00:07:26.353609 extend-filesystems[1539]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 13 00:07:26.340867 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 00:07:26.361067 extend-filesystems[1509]: Resized filesystem in /dev/vda9 Aug 13 00:07:26.347865 systemd-logind[1517]: Watching system buttons on /dev/input/event0 (Power Button) Aug 13 00:07:26.348543 systemd-logind[1517]: New seat seat0. Aug 13 00:07:26.352280 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 00:07:26.353565 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 00:07:26.359767 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 00:07:26.360025 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 00:07:26.393645 bash[1566]: Updated "/home/core/.ssh/authorized_keys" Aug 13 00:07:26.396286 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 00:07:26.398252 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
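[Note: extend-filesystems grew the root filesystem on /dev/vda9 online from 553472 to 1864699 4k blocks, as the resize2fs output confirms. Done by hand, the equivalent is growing the partition and then resizing the mounted ext4 filesystem; a sketch, assuming the cloud-utils growpart tool is available:

    growpart /dev/vda 9      # extend partition 9 to fill the disk
    resize2fs /dev/vda9      # grow ext4 online to the new partition size
]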
Aug 13 00:07:26.429692 locksmithd[1567]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 00:07:26.542845 containerd[1541]: time="2025-08-13T00:07:26.541113080Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 00:07:26.568478 containerd[1541]: time="2025-08-13T00:07:26.568416840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:07:26.570416 containerd[1541]: time="2025-08-13T00:07:26.570010880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:07:26.570416 containerd[1541]: time="2025-08-13T00:07:26.570061200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 00:07:26.570416 containerd[1541]: time="2025-08-13T00:07:26.570092360Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 00:07:26.570416 containerd[1541]: time="2025-08-13T00:07:26.570262760Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 00:07:26.570416 containerd[1541]: time="2025-08-13T00:07:26.570280320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 00:07:26.570416 containerd[1541]: time="2025-08-13T00:07:26.570338320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:07:26.570416 containerd[1541]: time="2025-08-13T00:07:26.570350800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:07:26.571006 containerd[1541]: time="2025-08-13T00:07:26.570974920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:07:26.571173 containerd[1541]: time="2025-08-13T00:07:26.571155840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 00:07:26.571239 containerd[1541]: time="2025-08-13T00:07:26.571224600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:07:26.571345 containerd[1541]: time="2025-08-13T00:07:26.571329920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 00:07:26.571814 containerd[1541]: time="2025-08-13T00:07:26.571542480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:07:26.571814 containerd[1541]: time="2025-08-13T00:07:26.571777000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 00:07:26.572232 containerd[1541]: time="2025-08-13T00:07:26.572206240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 00:07:26.572358 containerd[1541]: time="2025-08-13T00:07:26.572292520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 00:07:26.572500 containerd[1541]: time="2025-08-13T00:07:26.572484800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 00:07:26.572670 containerd[1541]: time="2025-08-13T00:07:26.572651080Z" level=info msg="metadata content store policy set" policy=shared Aug 13 00:07:26.585455 containerd[1541]: time="2025-08-13T00:07:26.585309240Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 00:07:26.586592 containerd[1541]: time="2025-08-13T00:07:26.585655480Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 00:07:26.586592 containerd[1541]: time="2025-08-13T00:07:26.585698880Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 00:07:26.586592 containerd[1541]: time="2025-08-13T00:07:26.585717600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 00:07:26.586592 containerd[1541]: time="2025-08-13T00:07:26.585732520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 00:07:26.586592 containerd[1541]: time="2025-08-13T00:07:26.585916760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 00:07:26.586592 containerd[1541]: time="2025-08-13T00:07:26.586405160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 00:07:26.586592 containerd[1541]: time="2025-08-13T00:07:26.586518000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 00:07:26.586592 containerd[1541]: time="2025-08-13T00:07:26.586535640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 00:07:26.586592 containerd[1541]: time="2025-08-13T00:07:26.586550560Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 00:07:26.588466 containerd[1541]: time="2025-08-13T00:07:26.586565200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 00:07:26.588545 containerd[1541]: time="2025-08-13T00:07:26.588480920Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 00:07:26.588545 containerd[1541]: time="2025-08-13T00:07:26.588514120Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 00:07:26.588545 containerd[1541]: time="2025-08-13T00:07:26.588535320Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 00:07:26.588608 containerd[1541]: time="2025-08-13T00:07:26.588555000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Aug 13 00:07:26.588608 containerd[1541]: time="2025-08-13T00:07:26.588573280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 00:07:26.588608 containerd[1541]: time="2025-08-13T00:07:26.588590360Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 00:07:26.588657 containerd[1541]: time="2025-08-13T00:07:26.588605920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 00:07:26.588657 containerd[1541]: time="2025-08-13T00:07:26.588637240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588693 containerd[1541]: time="2025-08-13T00:07:26.588652680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588693 containerd[1541]: time="2025-08-13T00:07:26.588670360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588693 containerd[1541]: time="2025-08-13T00:07:26.588686640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588748 containerd[1541]: time="2025-08-13T00:07:26.588703760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588748 containerd[1541]: time="2025-08-13T00:07:26.588724920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588748 containerd[1541]: time="2025-08-13T00:07:26.588740680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588804 containerd[1541]: time="2025-08-13T00:07:26.588754520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588804 containerd[1541]: time="2025-08-13T00:07:26.588771880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588804 containerd[1541]: time="2025-08-13T00:07:26.588791640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588857 containerd[1541]: time="2025-08-13T00:07:26.588807640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588857 containerd[1541]: time="2025-08-13T00:07:26.588823680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588857 containerd[1541]: time="2025-08-13T00:07:26.588840440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588913 containerd[1541]: time="2025-08-13T00:07:26.588863080Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 00:07:26.588913 containerd[1541]: time="2025-08-13T00:07:26.588893440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.588947 containerd[1541]: time="2025-08-13T00:07:26.588917960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Aug 13 00:07:26.588947 containerd[1541]: time="2025-08-13T00:07:26.588934520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 00:07:26.589475 containerd[1541]: time="2025-08-13T00:07:26.589069560Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 00:07:26.589475 containerd[1541]: time="2025-08-13T00:07:26.589114600Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 00:07:26.589475 containerd[1541]: time="2025-08-13T00:07:26.589138760Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 00:07:26.589475 containerd[1541]: time="2025-08-13T00:07:26.589157360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 00:07:26.589475 containerd[1541]: time="2025-08-13T00:07:26.589175960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.589475 containerd[1541]: time="2025-08-13T00:07:26.589199160Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 00:07:26.589475 containerd[1541]: time="2025-08-13T00:07:26.589214560Z" level=info msg="NRI interface is disabled by configuration." Aug 13 00:07:26.589475 containerd[1541]: time="2025-08-13T00:07:26.589235720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 00:07:26.590058 containerd[1541]: time="2025-08-13T00:07:26.589724880Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 00:07:26.590058 containerd[1541]: time="2025-08-13T00:07:26.589898200Z" level=info msg="Connect containerd service" Aug 13 00:07:26.590058 containerd[1541]: time="2025-08-13T00:07:26.589963600Z" level=info msg="using legacy CRI server" Aug 13 00:07:26.590058 containerd[1541]: time="2025-08-13T00:07:26.589978960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 00:07:26.591212 containerd[1541]: time="2025-08-13T00:07:26.590624960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 00:07:26.591703 containerd[1541]: time="2025-08-13T00:07:26.591670440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:07:26.592364 containerd[1541]: time="2025-08-13T00:07:26.592181960Z" level=info msg="Start subscribing containerd event" Aug 13 00:07:26.592410 containerd[1541]: time="2025-08-13T00:07:26.592394320Z" level=info msg="Start recovering state" Aug 13 00:07:26.592537 containerd[1541]: time="2025-08-13T00:07:26.592471480Z" level=info msg="Start event monitor" Aug 13 00:07:26.592537 containerd[1541]: time="2025-08-13T00:07:26.592488760Z" level=info msg="Start snapshots syncer" Aug 13 00:07:26.592537 containerd[1541]: time="2025-08-13T00:07:26.592499720Z" level=info msg="Start cni network conf syncer for default" Aug 13 00:07:26.592537 containerd[1541]: time="2025-08-13T00:07:26.592507480Z" level=info msg="Start streaming server" Aug 13 00:07:26.593187 containerd[1541]: time="2025-08-13T00:07:26.593132360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 00:07:26.595179 containerd[1541]: time="2025-08-13T00:07:26.593191840Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 00:07:26.595179 containerd[1541]: time="2025-08-13T00:07:26.593245680Z" level=info msg="containerd successfully booted in 0.053176s" Aug 13 00:07:26.593389 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 00:07:26.684940 tar[1534]: linux-arm64/LICENSE Aug 13 00:07:26.684940 tar[1534]: linux-arm64/README.md Aug 13 00:07:26.700993 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 00:07:26.886239 systemd-networkd[1223]: eth0: Gained IPv6LL Aug 13 00:07:26.889668 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 00:07:26.891758 systemd[1]: Reached target network-online.target - Network is Online. 
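(Aside, not part of the captured log: the entries above show containerd completing plugin registration and serving on /run/containerd/containerd.sock. A minimal Go sketch that verifies the daemon answers on that socket, assuming the stock github.com/containerd/containerd client library and the "k8s.io" namespace that the CRI plugin uses, might look like the following.)

```go
// Illustrative sketch only; this program does not appear anywhere in the log.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the same socket the log reports containerd serving on.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	version, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("version query: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", version.Version, version.Revision)
}
```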
Aug 13 00:07:26.903369 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 13 00:07:26.906934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:07:26.912158 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 00:07:26.939913 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 00:07:26.941779 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 13 00:07:26.942078 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 13 00:07:26.943835 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 00:07:27.278253 sshd_keygen[1528]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 00:07:27.297748 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 00:07:27.307337 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 00:07:27.315141 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 00:07:27.315408 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 00:07:27.318237 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 00:07:27.337327 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 00:07:27.354388 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 00:07:27.356648 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 13 00:07:27.357982 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 00:07:27.538774 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:07:27.540378 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 00:07:27.543187 (kubelet)[1642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:07:27.545186 systemd[1]: Startup finished in 6.491s (kernel) + 3.992s (userspace) = 10.483s. Aug 13 00:07:27.996519 kubelet[1642]: E0813 00:07:27.996398 1642 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:07:27.998633 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:07:27.998819 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 00:07:30.744595 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 00:07:30.756336 systemd[1]: Started sshd@0-10.0.0.72:22-10.0.0.1:34822.service - OpenSSH per-connection server daemon (10.0.0.1:34822). Aug 13 00:07:30.809179 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 34822 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:07:30.812596 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:30.825295 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 00:07:30.834354 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 00:07:30.836278 systemd-logind[1517]: New session 1 of user core. Aug 13 00:07:30.844758 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
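(Aside: the kubelet failure a few entries above, "open /var/lib/kubelet/config.yaml: no such file or directory", is the expected state on a node where kubeadm has not yet run; that path is normally written by kubeadm init or kubeadm join. Purely to illustrate the kind of document the unit is looking for, a hypothetical Go sketch that emits a skeleton KubeletConfiguration using the published k8s.io/kubelet/config/v1beta1 types:)

```go
// Illustrative sketch only: prints an empty-but-valid KubeletConfiguration
// header of the kind kubeadm would normally place at /var/lib/kubelet/config.yaml.
package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletconfig "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletconfig.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			Kind:       "KubeletConfiguration",
			APIVersion: "kubelet.config.k8s.io/v1beta1",
		},
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		log.Fatal(err)
	}
	// A real deployment fills in many more fields (cgroup driver, auth, etc.).
	fmt.Print(string(out))
}
```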
Aug 13 00:07:30.847714 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 00:07:30.856669 (systemd)[1662]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 00:07:30.959222 systemd[1662]: Queued start job for default target default.target. Aug 13 00:07:30.960146 systemd[1662]: Created slice app.slice - User Application Slice. Aug 13 00:07:30.960172 systemd[1662]: Reached target paths.target - Paths. Aug 13 00:07:30.960184 systemd[1662]: Reached target timers.target - Timers. Aug 13 00:07:30.969216 systemd[1662]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 00:07:30.976037 systemd[1662]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 00:07:30.976116 systemd[1662]: Reached target sockets.target - Sockets. Aug 13 00:07:30.976129 systemd[1662]: Reached target basic.target - Basic System. Aug 13 00:07:30.976170 systemd[1662]: Reached target default.target - Main User Target. Aug 13 00:07:30.976196 systemd[1662]: Startup finished in 113ms. Aug 13 00:07:30.976378 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 00:07:30.978822 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 00:07:31.059852 systemd[1]: Started sshd@1-10.0.0.72:22-10.0.0.1:34834.service - OpenSSH per-connection server daemon (10.0.0.1:34834). Aug 13 00:07:31.096495 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 34834 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:07:31.097791 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:31.102094 systemd-logind[1517]: New session 2 of user core. Aug 13 00:07:31.118455 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 00:07:31.171497 sshd[1674]: pam_unix(sshd:session): session closed for user core Aug 13 00:07:31.187418 systemd[1]: Started sshd@2-10.0.0.72:22-10.0.0.1:34844.service - OpenSSH per-connection server daemon (10.0.0.1:34844). Aug 13 00:07:31.188101 systemd[1]: sshd@1-10.0.0.72:22-10.0.0.1:34834.service: Deactivated successfully. Aug 13 00:07:31.190577 systemd-logind[1517]: Session 2 logged out. Waiting for processes to exit. Aug 13 00:07:31.191602 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 00:07:31.192913 systemd-logind[1517]: Removed session 2. Aug 13 00:07:31.225833 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 34844 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:07:31.227201 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:31.231215 systemd-logind[1517]: New session 3 of user core. Aug 13 00:07:31.242392 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 00:07:31.291035 sshd[1679]: pam_unix(sshd:session): session closed for user core Aug 13 00:07:31.301416 systemd[1]: Started sshd@3-10.0.0.72:22-10.0.0.1:34856.service - OpenSSH per-connection server daemon (10.0.0.1:34856). Aug 13 00:07:31.301819 systemd[1]: sshd@2-10.0.0.72:22-10.0.0.1:34844.service: Deactivated successfully. Aug 13 00:07:31.304413 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 00:07:31.304586 systemd-logind[1517]: Session 3 logged out. Waiting for processes to exit. Aug 13 00:07:31.306121 systemd-logind[1517]: Removed session 3. 
Aug 13 00:07:31.337197 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 34856 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:07:31.338870 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:31.342753 systemd-logind[1517]: New session 4 of user core. Aug 13 00:07:31.352370 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 00:07:31.403252 sshd[1687]: pam_unix(sshd:session): session closed for user core Aug 13 00:07:31.415355 systemd[1]: Started sshd@4-10.0.0.72:22-10.0.0.1:34868.service - OpenSSH per-connection server daemon (10.0.0.1:34868). Aug 13 00:07:31.415761 systemd[1]: sshd@3-10.0.0.72:22-10.0.0.1:34856.service: Deactivated successfully. Aug 13 00:07:31.417805 systemd-logind[1517]: Session 4 logged out. Waiting for processes to exit. Aug 13 00:07:31.418368 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 00:07:31.419910 systemd-logind[1517]: Removed session 4. Aug 13 00:07:31.453498 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 34868 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:07:31.454789 sshd[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:31.458938 systemd-logind[1517]: New session 5 of user core. Aug 13 00:07:31.469364 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 00:07:31.528592 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 00:07:31.529256 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:07:31.541980 sudo[1702]: pam_unix(sudo:session): session closed for user root Aug 13 00:07:31.543813 sshd[1695]: pam_unix(sshd:session): session closed for user core Aug 13 00:07:31.552373 systemd[1]: Started sshd@5-10.0.0.72:22-10.0.0.1:34872.service - OpenSSH per-connection server daemon (10.0.0.1:34872). Aug 13 00:07:31.552879 systemd[1]: sshd@4-10.0.0.72:22-10.0.0.1:34868.service: Deactivated successfully. Aug 13 00:07:31.554610 systemd-logind[1517]: Session 5 logged out. Waiting for processes to exit. Aug 13 00:07:31.555413 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 00:07:31.556667 systemd-logind[1517]: Removed session 5. Aug 13 00:07:31.587990 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 34872 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:07:31.589488 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:31.593860 systemd-logind[1517]: New session 6 of user core. Aug 13 00:07:31.601404 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 00:07:31.651591 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 00:07:31.651863 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:07:31.655613 sudo[1712]: pam_unix(sudo:session): session closed for user root Aug 13 00:07:31.660640 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 00:07:31.660922 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:07:31.678353 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 00:07:31.679802 auditctl[1715]: No rules Aug 13 00:07:31.680677 systemd[1]: audit-rules.service: Deactivated successfully. 
Aug 13 00:07:31.680930 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 00:07:31.682711 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 00:07:31.706918 augenrules[1734]: No rules Aug 13 00:07:31.708204 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 00:07:31.709240 sudo[1711]: pam_unix(sudo:session): session closed for user root Aug 13 00:07:31.710953 sshd[1704]: pam_unix(sshd:session): session closed for user core Aug 13 00:07:31.718324 systemd[1]: Started sshd@6-10.0.0.72:22-10.0.0.1:34884.service - OpenSSH per-connection server daemon (10.0.0.1:34884). Aug 13 00:07:31.718798 systemd[1]: sshd@5-10.0.0.72:22-10.0.0.1:34872.service: Deactivated successfully. Aug 13 00:07:31.720269 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 00:07:31.720858 systemd-logind[1517]: Session 6 logged out. Waiting for processes to exit. Aug 13 00:07:31.722134 systemd-logind[1517]: Removed session 6. Aug 13 00:07:31.753122 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 34884 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:07:31.754423 sshd[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:07:31.758129 systemd-logind[1517]: New session 7 of user core. Aug 13 00:07:31.768414 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 00:07:31.817470 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 00:07:31.817767 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 00:07:32.135341 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 00:07:32.135530 (dockerd)[1765]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 00:07:32.426380 dockerd[1765]: time="2025-08-13T00:07:32.426237365Z" level=info msg="Starting up" Aug 13 00:07:32.767893 dockerd[1765]: time="2025-08-13T00:07:32.767753490Z" level=info msg="Loading containers: start." Aug 13 00:07:32.852147 kernel: Initializing XFRM netlink socket Aug 13 00:07:32.922406 systemd-networkd[1223]: docker0: Link UP Aug 13 00:07:32.944595 dockerd[1765]: time="2025-08-13T00:07:32.944536232Z" level=info msg="Loading containers: done." Aug 13 00:07:32.962216 dockerd[1765]: time="2025-08-13T00:07:32.962156251Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 00:07:32.962374 dockerd[1765]: time="2025-08-13T00:07:32.962288551Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 00:07:32.962437 dockerd[1765]: time="2025-08-13T00:07:32.962415196Z" level=info msg="Daemon has completed initialization" Aug 13 00:07:32.992840 dockerd[1765]: time="2025-08-13T00:07:32.992687415Z" level=info msg="API listen on /run/docker.sock" Aug 13 00:07:32.993106 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 00:07:33.610166 containerd[1541]: time="2025-08-13T00:07:33.610118558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 00:07:34.253433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3830330932.mount: Deactivated successfully. 
Aug 13 00:07:35.084110 containerd[1541]: time="2025-08-13T00:07:35.084041716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:35.084644 containerd[1541]: time="2025-08-13T00:07:35.084605351Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651815" Aug 13 00:07:35.085616 containerd[1541]: time="2025-08-13T00:07:35.085558299Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:35.090170 containerd[1541]: time="2025-08-13T00:07:35.089786860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:35.091164 containerd[1541]: time="2025-08-13T00:07:35.090865712Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 1.480696851s" Aug 13 00:07:35.091164 containerd[1541]: time="2025-08-13T00:07:35.090920109Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 13 00:07:35.094966 containerd[1541]: time="2025-08-13T00:07:35.094897257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 00:07:36.086377 containerd[1541]: time="2025-08-13T00:07:36.086330926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:36.087247 containerd[1541]: time="2025-08-13T00:07:36.087197052Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460285" Aug 13 00:07:36.091088 containerd[1541]: time="2025-08-13T00:07:36.088512678Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:36.094386 containerd[1541]: time="2025-08-13T00:07:36.094331302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:36.095743 containerd[1541]: time="2025-08-13T00:07:36.095708219Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 1.000771056s" Aug 13 00:07:36.095743 containerd[1541]: time="2025-08-13T00:07:36.095745625Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 13 
00:07:36.096406 containerd[1541]: time="2025-08-13T00:07:36.096380301Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 00:07:37.082107 containerd[1541]: time="2025-08-13T00:07:37.081678020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:37.083396 containerd[1541]: time="2025-08-13T00:07:37.083362422Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125091" Aug 13 00:07:37.084182 containerd[1541]: time="2025-08-13T00:07:37.084145883Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:37.089260 containerd[1541]: time="2025-08-13T00:07:37.089203827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:37.089916 containerd[1541]: time="2025-08-13T00:07:37.089872770Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 993.458671ms" Aug 13 00:07:37.089916 containerd[1541]: time="2025-08-13T00:07:37.089911706Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 13 00:07:37.090376 containerd[1541]: time="2025-08-13T00:07:37.090353351Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 00:07:38.019327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3853199934.mount: Deactivated successfully. Aug 13 00:07:38.020781 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 00:07:38.028253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:07:38.209622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:07:38.224927 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 00:07:38.264510 kubelet[1995]: E0813 00:07:38.264461 1995 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 00:07:38.267345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 00:07:38.267496 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 00:07:38.471989 containerd[1541]: time="2025-08-13T00:07:38.471537802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:38.472608 containerd[1541]: time="2025-08-13T00:07:38.472584844Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26915995" Aug 13 00:07:38.473564 containerd[1541]: time="2025-08-13T00:07:38.473543408Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:38.475517 containerd[1541]: time="2025-08-13T00:07:38.475485650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:38.476170 containerd[1541]: time="2025-08-13T00:07:38.476023292Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 1.385638334s" Aug 13 00:07:38.476170 containerd[1541]: time="2025-08-13T00:07:38.476056437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 13 00:07:38.476695 containerd[1541]: time="2025-08-13T00:07:38.476505876Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 00:07:38.971622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1252020411.mount: Deactivated successfully. 
Aug 13 00:07:39.681019 containerd[1541]: time="2025-08-13T00:07:39.680971746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:39.682002 containerd[1541]: time="2025-08-13T00:07:39.681806777Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Aug 13 00:07:39.682931 containerd[1541]: time="2025-08-13T00:07:39.682899978Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:39.687040 containerd[1541]: time="2025-08-13T00:07:39.686701388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:39.688719 containerd[1541]: time="2025-08-13T00:07:39.688686538Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.212148651s" Aug 13 00:07:39.688820 containerd[1541]: time="2025-08-13T00:07:39.688803542Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 13 00:07:39.689313 containerd[1541]: time="2025-08-13T00:07:39.689293067Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 00:07:40.115960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount111364277.mount: Deactivated successfully. 
Aug 13 00:07:40.121065 containerd[1541]: time="2025-08-13T00:07:40.120870598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:40.121689 containerd[1541]: time="2025-08-13T00:07:40.121658992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Aug 13 00:07:40.122665 containerd[1541]: time="2025-08-13T00:07:40.122439534Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:40.125054 containerd[1541]: time="2025-08-13T00:07:40.125018153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:40.126194 containerd[1541]: time="2025-08-13T00:07:40.126148515Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 436.760018ms" Aug 13 00:07:40.126287 containerd[1541]: time="2025-08-13T00:07:40.126193130Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 13 00:07:40.126694 containerd[1541]: time="2025-08-13T00:07:40.126669498Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 00:07:40.684471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1087694768.mount: Deactivated successfully. Aug 13 00:07:41.980904 containerd[1541]: time="2025-08-13T00:07:41.980483711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:41.981982 containerd[1541]: time="2025-08-13T00:07:41.981705412Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Aug 13 00:07:41.982838 containerd[1541]: time="2025-08-13T00:07:41.982773950Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:41.986172 containerd[1541]: time="2025-08-13T00:07:41.986127922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:07:41.987605 containerd[1541]: time="2025-08-13T00:07:41.987468289Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.860764597s" Aug 13 00:07:41.987605 containerd[1541]: time="2025-08-13T00:07:41.987505290Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 13 00:07:47.368435 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
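(Aside: at this point the CRI plugin has pulled the full control-plane image set logged above: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause and etcd. For illustration only, the same kind of pull can be issued directly against containerd with its Go client; the image reference below is simply the first one from the log.)

```go
// Illustrative sketch only; mirrors the PullImage requests recorded in the log.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Fetch and unpack so the overlayfs snapshotter can mount the layers,
	// matching what the CRI plugin does for each "PullImage" entry above.
	img, err := client.Pull(ctx, "registry.k8s.io/kube-apiserver:v1.31.11",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```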
Aug 13 00:07:47.381388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:07:47.406227 systemd[1]: Reloading requested from client PID 2148 ('systemctl') (unit session-7.scope)... Aug 13 00:07:47.406245 systemd[1]: Reloading... Aug 13 00:07:47.479092 zram_generator::config[2187]: No configuration found. Aug 13 00:07:47.693582 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:07:47.750742 systemd[1]: Reloading finished in 344 ms. Aug 13 00:07:47.801953 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 00:07:47.802142 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 00:07:47.802615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:07:47.805054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:07:47.920560 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:07:47.928596 (kubelet)[2245]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:07:47.969903 kubelet[2245]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:07:47.969903 kubelet[2245]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:07:47.969903 kubelet[2245]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
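(Aside: in the entries that follow, every kubelet call to https://10.0.0.72:6443 fails with "connection refused". That is normal while the node bootstraps itself: the kubelet must first start the static control-plane pods that will serve that very endpoint. A hypothetical standalone probe for watching the API server come up, with TLS verification deliberately skipped since only reachability is being tested:)

```go
// Illustrative sketch only: poll the apiserver health endpoint seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Reachability check only; never skip verification for real traffic.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://10.0.0.72:6443/healthz")
		if err == nil {
			fmt.Println("apiserver answered:", resp.Status)
			resp.Body.Close()
			return
		}
		fmt.Println("still down:", err)
		time.Sleep(time.Second)
	}
}
```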
Aug 13 00:07:47.970529 kubelet[2245]: I0813 00:07:47.970465 2245 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:07:48.427907 kubelet[2245]: I0813 00:07:48.427510 2245 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:07:48.427907 kubelet[2245]: I0813 00:07:48.427553 2245 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:07:48.428712 kubelet[2245]: I0813 00:07:48.428681 2245 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:07:48.497411 kubelet[2245]: E0813 00:07:48.497364 2245 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:07:48.499165 kubelet[2245]: I0813 00:07:48.499140 2245 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:07:48.507267 kubelet[2245]: E0813 00:07:48.507202 2245 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:07:48.507267 kubelet[2245]: I0813 00:07:48.507247 2245 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:07:48.511538 kubelet[2245]: I0813 00:07:48.511508 2245 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 00:07:48.513142 kubelet[2245]: I0813 00:07:48.513107 2245 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:07:48.513344 kubelet[2245]: I0813 00:07:48.513292 2245 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:07:48.513566 kubelet[2245]: I0813 00:07:48.513337 2245 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:07:48.513730 kubelet[2245]: I0813 00:07:48.513713 2245 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:07:48.513730 kubelet[2245]: I0813 00:07:48.513725 2245 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:07:48.514188 kubelet[2245]: I0813 00:07:48.514170 2245 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:07:48.521792 kubelet[2245]: I0813 00:07:48.520912 2245 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:07:48.521792 kubelet[2245]: I0813 00:07:48.520967 2245 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:07:48.521792 kubelet[2245]: I0813 00:07:48.520996 2245 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:07:48.521792 kubelet[2245]: I0813 00:07:48.521153 2245 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:07:48.523012 kubelet[2245]: W0813 00:07:48.522850 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Aug 13 00:07:48.523181 kubelet[2245]: E0813 00:07:48.523157 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:07:48.523583 kubelet[2245]: W0813 00:07:48.523503 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Aug 13 00:07:48.523675 kubelet[2245]: E0813 00:07:48.523660 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:07:48.525639 kubelet[2245]: I0813 00:07:48.525616 2245 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:07:48.526456 kubelet[2245]: I0813 00:07:48.526441 2245 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:07:48.526694 kubelet[2245]: W0813 00:07:48.526682 2245 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 00:07:48.529990 kubelet[2245]: I0813 00:07:48.528992 2245 server.go:1274] "Started kubelet" Aug 13 00:07:48.530159 kubelet[2245]: I0813 00:07:48.530045 2245 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:07:48.532216 kubelet[2245]: I0813 00:07:48.532150 2245 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:07:48.532545 kubelet[2245]: I0813 00:07:48.532521 2245 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:07:48.532579 kubelet[2245]: I0813 00:07:48.532528 2245 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:07:48.532925 kubelet[2245]: I0813 00:07:48.532893 2245 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:07:48.534270 kubelet[2245]: I0813 00:07:48.534144 2245 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:07:48.536906 kubelet[2245]: I0813 00:07:48.536873 2245 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:07:48.537049 kubelet[2245]: I0813 00:07:48.537018 2245 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:07:48.537459 kubelet[2245]: I0813 00:07:48.537361 2245 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:07:48.538494 kubelet[2245]: W0813 00:07:48.538117 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Aug 13 00:07:48.538494 kubelet[2245]: E0813 00:07:48.538329 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Aug 13 
00:07:48.538629 kubelet[2245]: E0813 00:07:48.538495 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="200ms" Aug 13 00:07:48.538760 kubelet[2245]: E0813 00:07:48.538729 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:07:48.539123 kubelet[2245]: I0813 00:07:48.539098 2245 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:07:48.539243 kubelet[2245]: I0813 00:07:48.539225 2245 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:07:48.540288 kubelet[2245]: E0813 00:07:48.535383 2245 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.72:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.72:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2aeafbf78055 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:07:48.528955477 +0000 UTC m=+0.596316995,LastTimestamp:2025-08-13 00:07:48.528955477 +0000 UTC m=+0.596316995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:07:48.540638 kubelet[2245]: E0813 00:07:48.540507 2245 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:07:48.541274 kubelet[2245]: I0813 00:07:48.541252 2245 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:07:48.560850 kubelet[2245]: I0813 00:07:48.560341 2245 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:07:48.561897 kubelet[2245]: I0813 00:07:48.561865 2245 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:07:48.561897 kubelet[2245]: I0813 00:07:48.561895 2245 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:07:48.562039 kubelet[2245]: I0813 00:07:48.561913 2245 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:07:48.562039 kubelet[2245]: E0813 00:07:48.561974 2245 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:07:48.562326 kubelet[2245]: I0813 00:07:48.562308 2245 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:07:48.562326 kubelet[2245]: I0813 00:07:48.562325 2245 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:07:48.562374 kubelet[2245]: I0813 00:07:48.562347 2245 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:07:48.562969 kubelet[2245]: W0813 00:07:48.562941 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Aug 13 00:07:48.563016 kubelet[2245]: E0813 00:07:48.562989 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:07:48.639179 kubelet[2245]: E0813 00:07:48.639145 2245 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:07:48.640710 kubelet[2245]: I0813 00:07:48.640684 2245 policy_none.go:49] "None policy: Start" Aug 13 00:07:48.641841 kubelet[2245]: I0813 00:07:48.641811 2245 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:07:48.641841 kubelet[2245]: I0813 00:07:48.641849 2245 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:07:48.654484 kubelet[2245]: I0813 00:07:48.654403 2245 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:07:48.655662 kubelet[2245]: I0813 00:07:48.654709 2245 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:07:48.655662 kubelet[2245]: I0813 00:07:48.654728 2245 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:07:48.655662 kubelet[2245]: I0813 00:07:48.655562 2245 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:07:48.657119 kubelet[2245]: E0813 00:07:48.657096 2245 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 00:07:48.738363 kubelet[2245]: I0813 00:07:48.738235 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:07:48.738363 kubelet[2245]: I0813 00:07:48.738281 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:07:48.738363 kubelet[2245]: I0813 00:07:48.738306 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/713d11042bca2659b7c34acdc43c0bda-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"713d11042bca2659b7c34acdc43c0bda\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:07:48.738503 kubelet[2245]: I0813 00:07:48.738368 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/713d11042bca2659b7c34acdc43c0bda-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"713d11042bca2659b7c34acdc43c0bda\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:07:48.738503 kubelet[2245]: I0813 00:07:48.738438 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:07:48.738503 kubelet[2245]: I0813 00:07:48.738456 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:07:48.738503 kubelet[2245]: I0813 00:07:48.738476 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:07:48.738618 kubelet[2245]: I0813 00:07:48.738495 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:07:48.738618 kubelet[2245]: I0813 00:07:48.738520 2245 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/713d11042bca2659b7c34acdc43c0bda-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"713d11042bca2659b7c34acdc43c0bda\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:07:48.739081 kubelet[2245]: E0813 00:07:48.739022 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="400ms" Aug 13 00:07:48.756463 kubelet[2245]: I0813 00:07:48.756437 2245 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:07:48.757012 kubelet[2245]: E0813 00:07:48.756977 2245 kubelet_node_status.go:95] "Unable to register node 
with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Aug 13 00:07:48.959065 kubelet[2245]: I0813 00:07:48.959033 2245 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:07:48.959404 kubelet[2245]: E0813 00:07:48.959368 2245 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Aug 13 00:07:48.969235 kubelet[2245]: E0813 00:07:48.969204 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:48.969884 containerd[1541]: time="2025-08-13T00:07:48.969817248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:713d11042bca2659b7c34acdc43c0bda,Namespace:kube-system,Attempt:0,}" Aug 13 00:07:48.972463 kubelet[2245]: E0813 00:07:48.972433 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:48.973427 containerd[1541]: time="2025-08-13T00:07:48.972991246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 00:07:48.973822 kubelet[2245]: E0813 00:07:48.973802 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:48.974188 containerd[1541]: time="2025-08-13T00:07:48.974159121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 00:07:49.140159 kubelet[2245]: E0813 00:07:49.139992 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="800ms" Aug 13 00:07:49.355491 kubelet[2245]: W0813 00:07:49.355410 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Aug 13 00:07:49.355491 kubelet[2245]: E0813 00:07:49.355489 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:07:49.360823 kubelet[2245]: I0813 00:07:49.360781 2245 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:07:49.361156 kubelet[2245]: E0813 00:07:49.361123 2245 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Aug 13 00:07:49.419007 kubelet[2245]: W0813 00:07:49.418820 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Aug 13 00:07:49.419007 kubelet[2245]: E0813 00:07:49.418889 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:07:49.527923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4129064816.mount: Deactivated successfully. Aug 13 00:07:49.572476 kubelet[2245]: W0813 00:07:49.572412 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Aug 13 00:07:49.572620 kubelet[2245]: E0813 00:07:49.572479 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:07:49.724522 containerd[1541]: time="2025-08-13T00:07:49.724342841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:07:49.734465 containerd[1541]: time="2025-08-13T00:07:49.734403246Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Aug 13 00:07:49.746765 containerd[1541]: time="2025-08-13T00:07:49.746714205Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:07:49.756415 kubelet[2245]: W0813 00:07:49.756343 2245 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.72:6443: connect: connection refused Aug 13 00:07:49.756562 kubelet[2245]: E0813 00:07:49.756423 2245 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:07:49.760127 containerd[1541]: time="2025-08-13T00:07:49.760017364Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:07:49.810187 containerd[1541]: time="2025-08-13T00:07:49.810063370Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:07:49.823857 containerd[1541]: time="2025-08-13T00:07:49.823798893Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:07:49.882021 
containerd[1541]: time="2025-08-13T00:07:49.881936862Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 00:07:49.883613 kubelet[2245]: E0813 00:07:49.883500 2245 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.72:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.72:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2aeafbf78055 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 00:07:48.528955477 +0000 UTC m=+0.596316995,LastTimestamp:2025-08-13 00:07:48.528955477 +0000 UTC m=+0.596316995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 00:07:49.941251 kubelet[2245]: E0813 00:07:49.941180 2245 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.72:6443: connect: connection refused" interval="1.6s" Aug 13 00:07:49.959153 containerd[1541]: time="2025-08-13T00:07:49.959096716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 00:07:49.960179 containerd[1541]: time="2025-08-13T00:07:49.960147525Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 990.24349ms" Aug 13 00:07:49.961271 containerd[1541]: time="2025-08-13T00:07:49.961080678Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 987.834932ms" Aug 13 00:07:50.034040 containerd[1541]: time="2025-08-13T00:07:50.033317760Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.059093345s" Aug 13 00:07:50.169208 kubelet[2245]: I0813 00:07:50.168874 2245 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:07:50.169618 kubelet[2245]: E0813 00:07:50.169263 2245 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.72:6443/api/v1/nodes\": dial tcp 10.0.0.72:6443: connect: connection refused" node="localhost" Aug 13 00:07:50.316848 containerd[1541]: time="2025-08-13T00:07:50.316521575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:07:50.316848 containerd[1541]: time="2025-08-13T00:07:50.316612849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:07:50.316848 containerd[1541]: time="2025-08-13T00:07:50.316630369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:07:50.317936 containerd[1541]: time="2025-08-13T00:07:50.317171548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:07:50.317936 containerd[1541]: time="2025-08-13T00:07:50.317229457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:07:50.317936 containerd[1541]: time="2025-08-13T00:07:50.317241430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:07:50.317936 containerd[1541]: time="2025-08-13T00:07:50.317338371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:07:50.317936 containerd[1541]: time="2025-08-13T00:07:50.317193099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:07:50.323202 containerd[1541]: time="2025-08-13T00:07:50.322456621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:07:50.323202 containerd[1541]: time="2025-08-13T00:07:50.322528858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:07:50.323202 containerd[1541]: time="2025-08-13T00:07:50.322545141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:07:50.323202 containerd[1541]: time="2025-08-13T00:07:50.322641205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:07:50.385277 containerd[1541]: time="2025-08-13T00:07:50.385207052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"998385fc180abdd06f447eb41aa6bd47e6021bfb7f3aed2dd032e9fe35f16154\"" Aug 13 00:07:50.388254 kubelet[2245]: E0813 00:07:50.387987 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:50.388414 containerd[1541]: time="2025-08-13T00:07:50.388132570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1a6c43050629e9486431e910e04842d6ffa104c15cc115401213d9db9f4a628\"" Aug 13 00:07:50.390153 kubelet[2245]: E0813 00:07:50.390120 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:50.392513 containerd[1541]: time="2025-08-13T00:07:50.392475450Z" level=info msg="CreateContainer within sandbox \"998385fc180abdd06f447eb41aa6bd47e6021bfb7f3aed2dd032e9fe35f16154\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 00:07:50.392889 containerd[1541]: time="2025-08-13T00:07:50.392742527Z" level=info msg="CreateContainer within sandbox \"b1a6c43050629e9486431e910e04842d6ffa104c15cc115401213d9db9f4a628\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 00:07:50.394753 containerd[1541]: time="2025-08-13T00:07:50.394713599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:713d11042bca2659b7c34acdc43c0bda,Namespace:kube-system,Attempt:0,} returns sandbox id \"61052a4be0f5bb8a03c390b0e32734e1b88768bc074f78531874bdc97cb66d7f\"" Aug 13 00:07:50.396756 kubelet[2245]: E0813 00:07:50.396281 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:50.398285 containerd[1541]: time="2025-08-13T00:07:50.398243633Z" level=info msg="CreateContainer within sandbox \"61052a4be0f5bb8a03c390b0e32734e1b88768bc074f78531874bdc97cb66d7f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 00:07:50.422284 containerd[1541]: time="2025-08-13T00:07:50.422219247Z" level=info msg="CreateContainer within sandbox \"b1a6c43050629e9486431e910e04842d6ffa104c15cc115401213d9db9f4a628\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7d14a38176f684899bfe74c19fcab5ec362e44ea1a34a6ebcc270d120bde83ca\"" Aug 13 00:07:50.423093 containerd[1541]: time="2025-08-13T00:07:50.423038797Z" level=info msg="StartContainer for \"7d14a38176f684899bfe74c19fcab5ec362e44ea1a34a6ebcc270d120bde83ca\"" Aug 13 00:07:50.431895 containerd[1541]: time="2025-08-13T00:07:50.431831276Z" level=info msg="CreateContainer within sandbox \"998385fc180abdd06f447eb41aa6bd47e6021bfb7f3aed2dd032e9fe35f16154\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5c5a6120a18db6f365a631746046a371cd2af6d58eaff1312075543cabb38884\"" Aug 13 00:07:50.432691 containerd[1541]: time="2025-08-13T00:07:50.432660644Z" level=info msg="StartContainer for 
\"5c5a6120a18db6f365a631746046a371cd2af6d58eaff1312075543cabb38884\"" Aug 13 00:07:50.441508 containerd[1541]: time="2025-08-13T00:07:50.441192510Z" level=info msg="CreateContainer within sandbox \"61052a4be0f5bb8a03c390b0e32734e1b88768bc074f78531874bdc97cb66d7f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"79628ed3d0d916e0b063d749a9c7348f8faa3950e8a95de073421e232bc93784\"" Aug 13 00:07:50.441777 containerd[1541]: time="2025-08-13T00:07:50.441743267Z" level=info msg="StartContainer for \"79628ed3d0d916e0b063d749a9c7348f8faa3950e8a95de073421e232bc93784\"" Aug 13 00:07:50.510239 kubelet[2245]: E0813 00:07:50.510169 2245 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.72:6443: connect: connection refused" logger="UnhandledError" Aug 13 00:07:50.554990 containerd[1541]: time="2025-08-13T00:07:50.554923374Z" level=info msg="StartContainer for \"5c5a6120a18db6f365a631746046a371cd2af6d58eaff1312075543cabb38884\" returns successfully" Aug 13 00:07:50.555522 containerd[1541]: time="2025-08-13T00:07:50.555381141Z" level=info msg="StartContainer for \"79628ed3d0d916e0b063d749a9c7348f8faa3950e8a95de073421e232bc93784\" returns successfully" Aug 13 00:07:50.556095 containerd[1541]: time="2025-08-13T00:07:50.555408000Z" level=info msg="StartContainer for \"7d14a38176f684899bfe74c19fcab5ec362e44ea1a34a6ebcc270d120bde83ca\" returns successfully" Aug 13 00:07:50.582947 kubelet[2245]: E0813 00:07:50.582654 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:50.585998 kubelet[2245]: E0813 00:07:50.585539 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:50.588484 kubelet[2245]: E0813 00:07:50.588382 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:51.591045 kubelet[2245]: E0813 00:07:51.591001 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:51.771498 kubelet[2245]: I0813 00:07:51.770685 2245 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:07:52.170861 kubelet[2245]: E0813 00:07:52.170828 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:53.344085 kubelet[2245]: E0813 00:07:53.343815 2245 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 13 00:07:53.506315 kubelet[2245]: I0813 00:07:53.506271 2245 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:07:53.536632 kubelet[2245]: I0813 00:07:53.536569 2245 apiserver.go:52] "Watching apiserver" Aug 13 00:07:53.637302 kubelet[2245]: I0813 00:07:53.637167 2245 desired_state_of_world_populator.go:155] "Finished populating initial 
desired state of world" Aug 13 00:07:54.326652 kubelet[2245]: E0813 00:07:54.326612 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:54.596372 kubelet[2245]: E0813 00:07:54.596173 2245 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:55.895014 systemd[1]: Reloading requested from client PID 2524 ('systemctl') (unit session-7.scope)... Aug 13 00:07:55.895030 systemd[1]: Reloading... Aug 13 00:07:55.970230 zram_generator::config[2566]: No configuration found. Aug 13 00:07:56.064465 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 00:07:56.125176 systemd[1]: Reloading finished in 229 ms. Aug 13 00:07:56.149383 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:07:56.165484 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 00:07:56.165790 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:07:56.174350 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 00:07:56.280810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 00:07:56.296506 (kubelet)[2615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 00:07:56.339363 kubelet[2615]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:07:56.339363 kubelet[2615]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 00:07:56.339363 kubelet[2615]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 00:07:56.339363 kubelet[2615]: I0813 00:07:56.338754 2615 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 00:07:56.346287 kubelet[2615]: I0813 00:07:56.346244 2615 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 00:07:56.346287 kubelet[2615]: I0813 00:07:56.346280 2615 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 00:07:56.346571 kubelet[2615]: I0813 00:07:56.346555 2615 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 00:07:56.348169 kubelet[2615]: I0813 00:07:56.348107 2615 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
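The restart above shows the kubelet reloading its rotated client credential from /var/lib/kubelet/pki/kubelet-client-current.pem, a single PEM file carrying both the certificate and its private key (the kubelet's certificate manager keeps that path pointing at the newest rotated credential). As a minimal sketch of reading such a combined PEM, assuming only Go's standard library and that the file exists on the host:

```go
// Minimal sketch: loading a combined client cert/key PEM such as
// /var/lib/kubelet/pki/kubelet-client-current.pem. crypto/tls accepts
// the same path for both arguments when certificate and key share one
// file, since LoadX509KeyPair scans each file for CERTIFICATE and
// PRIVATE KEY blocks.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Path as seen in the log; this sketch assumes it is readable.
	const pemPath = "/var/lib/kubelet/pki/kubelet-client-current.pem"

	cert, err := tls.LoadX509KeyPair(pemPath, pemPath)
	if err != nil {
		log.Fatalf("loading client credential: %v", err)
	}
	fmt.Printf("loaded credential with %d certificate(s) in chain\n", len(cert.Certificate))
}
```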
Aug 13 00:07:56.350258 kubelet[2615]: I0813 00:07:56.350219 2615 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 00:07:56.353671 kubelet[2615]: E0813 00:07:56.353627 2615 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 00:07:56.353671 kubelet[2615]: I0813 00:07:56.353668 2615 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 00:07:56.356148 kubelet[2615]: I0813 00:07:56.356123 2615 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 00:07:56.356570 kubelet[2615]: I0813 00:07:56.356550 2615 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 00:07:56.356758 kubelet[2615]: I0813 00:07:56.356653 2615 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 00:07:56.356931 kubelet[2615]: I0813 00:07:56.356686 2615 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 00:07:56.356931 kubelet[2615]: I0813 00:07:56.356872 2615 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 00:07:56.356931 kubelet[2615]: I0813 00:07:56.356881 2615 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 00:07:56.356931 kubelet[2615]: I0813 00:07:56.356932 2615 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:07:56.357132 kubelet[2615]: I0813 00:07:56.357040 2615 kubelet.go:408] "Attempting to sync node with API server" Aug 13 00:07:56.357132 kubelet[2615]: I0813 00:07:56.357053 2615 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 00:07:56.357132 kubelet[2615]: I0813 00:07:56.357091 
2615 kubelet.go:314] "Adding apiserver pod source" Aug 13 00:07:56.357132 kubelet[2615]: I0813 00:07:56.357107 2615 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 00:07:56.359417 kubelet[2615]: I0813 00:07:56.358873 2615 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 00:07:56.359518 kubelet[2615]: I0813 00:07:56.359506 2615 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 00:07:56.360020 kubelet[2615]: I0813 00:07:56.359994 2615 server.go:1274] "Started kubelet" Aug 13 00:07:56.363093 kubelet[2615]: I0813 00:07:56.360303 2615 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 00:07:56.363183 kubelet[2615]: I0813 00:07:56.360387 2615 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 00:07:56.365833 kubelet[2615]: I0813 00:07:56.363594 2615 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 00:07:56.366511 kubelet[2615]: I0813 00:07:56.366368 2615 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 00:07:56.366696 kubelet[2615]: I0813 00:07:56.366664 2615 server.go:449] "Adding debug handlers to kubelet server" Aug 13 00:07:56.369181 kubelet[2615]: I0813 00:07:56.369154 2615 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 00:07:56.369367 kubelet[2615]: E0813 00:07:56.369343 2615 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 00:07:56.372106 kubelet[2615]: I0813 00:07:56.369970 2615 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 00:07:56.372106 kubelet[2615]: I0813 00:07:56.371237 2615 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 00:07:56.380180 kubelet[2615]: I0813 00:07:56.377672 2615 factory.go:221] Registration of the systemd container factory successfully Aug 13 00:07:56.380180 kubelet[2615]: I0813 00:07:56.377839 2615 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 00:07:56.381546 kubelet[2615]: I0813 00:07:56.381499 2615 factory.go:221] Registration of the containerd container factory successfully Aug 13 00:07:56.384861 kubelet[2615]: I0813 00:07:56.384818 2615 reconciler.go:26] "Reconciler: start to sync state" Aug 13 00:07:56.387256 kubelet[2615]: I0813 00:07:56.387198 2615 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 00:07:56.388275 kubelet[2615]: E0813 00:07:56.388241 2615 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 00:07:56.389882 kubelet[2615]: I0813 00:07:56.389826 2615 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 00:07:56.389882 kubelet[2615]: I0813 00:07:56.389857 2615 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 00:07:56.389882 kubelet[2615]: I0813 00:07:56.389887 2615 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 00:07:56.390001 kubelet[2615]: E0813 00:07:56.389937 2615 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 00:07:56.434547 kubelet[2615]: I0813 00:07:56.434059 2615 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 00:07:56.434547 kubelet[2615]: I0813 00:07:56.434104 2615 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 00:07:56.434547 kubelet[2615]: I0813 00:07:56.434128 2615 state_mem.go:36] "Initialized new in-memory state store" Aug 13 00:07:56.434547 kubelet[2615]: I0813 00:07:56.434285 2615 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 00:07:56.434547 kubelet[2615]: I0813 00:07:56.434295 2615 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 00:07:56.434547 kubelet[2615]: I0813 00:07:56.434315 2615 policy_none.go:49] "None policy: Start" Aug 13 00:07:56.436040 kubelet[2615]: I0813 00:07:56.435436 2615 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 00:07:56.436040 kubelet[2615]: I0813 00:07:56.435467 2615 state_mem.go:35] "Initializing new in-memory state store" Aug 13 00:07:56.436040 kubelet[2615]: I0813 00:07:56.435610 2615 state_mem.go:75] "Updated machine memory state" Aug 13 00:07:56.436794 kubelet[2615]: I0813 00:07:56.436769 2615 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 00:07:56.437023 kubelet[2615]: I0813 00:07:56.437006 2615 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 00:07:56.437054 kubelet[2615]: I0813 00:07:56.437026 2615 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 00:07:56.437342 kubelet[2615]: I0813 00:07:56.437327 2615 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 00:07:56.505120 kubelet[2615]: E0813 00:07:56.505048 2615 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:07:56.540788 kubelet[2615]: I0813 00:07:56.540530 2615 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 00:07:56.558099 kubelet[2615]: I0813 00:07:56.558053 2615 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 00:07:56.558278 kubelet[2615]: I0813 00:07:56.558160 2615 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 00:07:56.586295 kubelet[2615]: I0813 00:07:56.586254 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/713d11042bca2659b7c34acdc43c0bda-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"713d11042bca2659b7c34acdc43c0bda\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:07:56.586295 kubelet[2615]: I0813 00:07:56.586297 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/713d11042bca2659b7c34acdc43c0bda-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"713d11042bca2659b7c34acdc43c0bda\") " 
pod="kube-system/kube-apiserver-localhost" Aug 13 00:07:56.586465 kubelet[2615]: I0813 00:07:56.586320 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:07:56.586465 kubelet[2615]: I0813 00:07:56.586339 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:07:56.586465 kubelet[2615]: I0813 00:07:56.586381 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 00:07:56.586465 kubelet[2615]: I0813 00:07:56.586398 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/713d11042bca2659b7c34acdc43c0bda-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"713d11042bca2659b7c34acdc43c0bda\") " pod="kube-system/kube-apiserver-localhost" Aug 13 00:07:56.586465 kubelet[2615]: I0813 00:07:56.586413 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:07:56.586567 kubelet[2615]: I0813 00:07:56.586427 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:07:56.586567 kubelet[2615]: I0813 00:07:56.586449 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 00:07:56.806177 kubelet[2615]: E0813 00:07:56.805385 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:56.806177 kubelet[2615]: E0813 00:07:56.805410 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:56.806177 kubelet[2615]: E0813 00:07:56.805863 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:57.358284 kubelet[2615]: I0813 00:07:57.357992 2615 apiserver.go:52] "Watching apiserver" Aug 13 00:07:57.370329 kubelet[2615]: I0813 00:07:57.370283 2615 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 00:07:57.419015 kubelet[2615]: E0813 00:07:57.415478 2615 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 13 00:07:57.419015 kubelet[2615]: E0813 00:07:57.415664 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:57.422140 kubelet[2615]: E0813 00:07:57.420463 2615 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 00:07:57.422140 kubelet[2615]: E0813 00:07:57.420663 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:57.422140 kubelet[2615]: E0813 00:07:57.421244 2615 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 00:07:57.422140 kubelet[2615]: E0813 00:07:57.421408 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:57.434096 kubelet[2615]: I0813 00:07:57.433669 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.433478213 podStartE2EDuration="1.433478213s" podCreationTimestamp="2025-08-13 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:07:57.420803949 +0000 UTC m=+1.121013234" watchObservedRunningTime="2025-08-13 00:07:57.433478213 +0000 UTC m=+1.133687498" Aug 13 00:07:57.434096 kubelet[2615]: I0813 00:07:57.433971 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.433950155 podStartE2EDuration="1.433950155s" podCreationTimestamp="2025-08-13 00:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:07:57.433274713 +0000 UTC m=+1.133483998" watchObservedRunningTime="2025-08-13 00:07:57.433950155 +0000 UTC m=+1.134159520" Aug 13 00:07:57.448416 kubelet[2615]: I0813 00:07:57.448147 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.448127968 podStartE2EDuration="3.448127968s" podCreationTimestamp="2025-08-13 00:07:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:07:57.447153271 +0000 UTC m=+1.147362555" watchObservedRunningTime="2025-08-13 00:07:57.448127968 +0000 UTC m=+1.148337253" Aug 13 00:07:58.408792 kubelet[2615]: E0813 00:07:58.408487 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:58.408792 kubelet[2615]: E0813 00:07:58.408537 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:58.409553 kubelet[2615]: E0813 00:07:58.409530 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:07:59.410415 kubelet[2615]: E0813 00:07:59.410379 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:00.151916 kubelet[2615]: E0813 00:08:00.151541 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:00.412607 kubelet[2615]: E0813 00:08:00.412463 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:01.385413 kubelet[2615]: E0813 00:08:01.385376 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:01.413802 kubelet[2615]: E0813 00:08:01.413772 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:01.716502 kubelet[2615]: I0813 00:08:01.716237 2615 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 00:08:01.717005 containerd[1541]: time="2025-08-13T00:08:01.716843839Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
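The recurring dns.go:153 records are the kubelet noting that the host configures more nameservers than the glibc resolver's limit of three, so only 1.1.1.1, 1.0.0.1, and 8.8.8.8 are applied. A minimal sketch of that check, assuming the conventional /etc/resolv.conf location and the usual three-server cap (the kubelet's actual logic lives in its dns package):

```go
// Minimal sketch of the check behind the repeated "Nameserver limits
// exceeded" warnings: the glibc resolver honours at most three
// nameserver entries, so anything beyond that in /etc/resolv.conf is
// effectively dropped. Path and limit here are the conventional ones.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("warning: %d nameservers configured, only the first %d are used: %s\n",
			len(servers), maxNameservers, strings.Join(servers[:maxNameservers], " "))
	}
}
```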
Aug 13 00:08:01.718466 kubelet[2615]: I0813 00:08:01.717175 2615 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 00:08:02.414613 kubelet[2615]: I0813 00:08:02.414575 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/01d869f8-0bec-4624-abb8-2a94babf717f-kube-proxy\") pod \"kube-proxy-mrmzx\" (UID: \"01d869f8-0bec-4624-abb8-2a94babf717f\") " pod="kube-system/kube-proxy-mrmzx" Aug 13 00:08:02.414613 kubelet[2615]: I0813 00:08:02.414616 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01d869f8-0bec-4624-abb8-2a94babf717f-lib-modules\") pod \"kube-proxy-mrmzx\" (UID: \"01d869f8-0bec-4624-abb8-2a94babf717f\") " pod="kube-system/kube-proxy-mrmzx" Aug 13 00:08:02.415065 kubelet[2615]: I0813 00:08:02.414639 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28m79\" (UniqueName: \"kubernetes.io/projected/01d869f8-0bec-4624-abb8-2a94babf717f-kube-api-access-28m79\") pod \"kube-proxy-mrmzx\" (UID: \"01d869f8-0bec-4624-abb8-2a94babf717f\") " pod="kube-system/kube-proxy-mrmzx" Aug 13 00:08:02.415065 kubelet[2615]: I0813 00:08:02.414660 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01d869f8-0bec-4624-abb8-2a94babf717f-xtables-lock\") pod \"kube-proxy-mrmzx\" (UID: \"01d869f8-0bec-4624-abb8-2a94babf717f\") " pod="kube-system/kube-proxy-mrmzx" Aug 13 00:08:02.523755 kubelet[2615]: E0813 00:08:02.523636 2615 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 00:08:02.523755 kubelet[2615]: E0813 00:08:02.523675 2615 projected.go:194] Error preparing data for projected volume kube-api-access-28m79 for pod kube-system/kube-proxy-mrmzx: configmap "kube-root-ca.crt" not found Aug 13 00:08:02.523755 kubelet[2615]: E0813 00:08:02.523737 2615 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/01d869f8-0bec-4624-abb8-2a94babf717f-kube-api-access-28m79 podName:01d869f8-0bec-4624-abb8-2a94babf717f nodeName:}" failed. No retries permitted until 2025-08-13 00:08:03.023715206 +0000 UTC m=+6.723924491 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-28m79" (UniqueName: "kubernetes.io/projected/01d869f8-0bec-4624-abb8-2a94babf717f-kube-api-access-28m79") pod "kube-proxy-mrmzx" (UID: "01d869f8-0bec-4624-abb8-2a94babf717f") : configmap "kube-root-ca.crt" not found Aug 13 00:08:03.018015 kubelet[2615]: I0813 00:08:03.017952 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfrzk\" (UniqueName: \"kubernetes.io/projected/d8ae7d86-6e6d-4f30-9d20-f9fe7e174c5c-kube-api-access-wfrzk\") pod \"tigera-operator-5bf8dfcb4-rq5c9\" (UID: \"d8ae7d86-6e6d-4f30-9d20-f9fe7e174c5c\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-rq5c9" Aug 13 00:08:03.018185 kubelet[2615]: I0813 00:08:03.018049 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d8ae7d86-6e6d-4f30-9d20-f9fe7e174c5c-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-rq5c9\" (UID: \"d8ae7d86-6e6d-4f30-9d20-f9fe7e174c5c\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-rq5c9" Aug 13 00:08:03.218334 containerd[1541]: time="2025-08-13T00:08:03.217952981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-rq5c9,Uid:d8ae7d86-6e6d-4f30-9d20-f9fe7e174c5c,Namespace:tigera-operator,Attempt:0,}" Aug 13 00:08:03.226136 kubelet[2615]: E0813 00:08:03.225914 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:03.227602 containerd[1541]: time="2025-08-13T00:08:03.226405832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mrmzx,Uid:01d869f8-0bec-4624-abb8-2a94babf717f,Namespace:kube-system,Attempt:0,}" Aug 13 00:08:03.257651 containerd[1541]: time="2025-08-13T00:08:03.257545226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:08:03.257651 containerd[1541]: time="2025-08-13T00:08:03.257598922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:08:03.257651 containerd[1541]: time="2025-08-13T00:08:03.257610326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:03.257874 containerd[1541]: time="2025-08-13T00:08:03.257700273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:03.274332 containerd[1541]: time="2025-08-13T00:08:03.274130512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:08:03.274332 containerd[1541]: time="2025-08-13T00:08:03.274211737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:08:03.274332 containerd[1541]: time="2025-08-13T00:08:03.274223900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:03.274822 containerd[1541]: time="2025-08-13T00:08:03.274470055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:03.311142 containerd[1541]: time="2025-08-13T00:08:03.311059427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mrmzx,Uid:01d869f8-0bec-4624-abb8-2a94babf717f,Namespace:kube-system,Attempt:0,} returns sandbox id \"3982626bf296b3a94cbd890f0493539b3740a3bdc9cf9e1fd7f3e76b0d4a5bcb\"" Aug 13 00:08:03.312291 kubelet[2615]: E0813 00:08:03.311883 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:03.315985 containerd[1541]: time="2025-08-13T00:08:03.315576921Z" level=info msg="CreateContainer within sandbox \"3982626bf296b3a94cbd890f0493539b3740a3bdc9cf9e1fd7f3e76b0d4a5bcb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 00:08:03.316331 containerd[1541]: time="2025-08-13T00:08:03.316186267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-rq5c9,Uid:d8ae7d86-6e6d-4f30-9d20-f9fe7e174c5c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"90eeb328119bad4588c2831424f156a3261c0407eb27a4c6b25e3fcd4296bdbf\"" Aug 13 00:08:03.318264 containerd[1541]: time="2025-08-13T00:08:03.318152385Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 00:08:03.342973 containerd[1541]: time="2025-08-13T00:08:03.342916039Z" level=info msg="CreateContainer within sandbox \"3982626bf296b3a94cbd890f0493539b3740a3bdc9cf9e1fd7f3e76b0d4a5bcb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8e6f7f7c6d71704f33ffb19864b94b8ef2209186d5091730badd0c8302832639\"" Aug 13 00:08:03.343827 containerd[1541]: time="2025-08-13T00:08:03.343781902Z" level=info msg="StartContainer for \"8e6f7f7c6d71704f33ffb19864b94b8ef2209186d5091730badd0c8302832639\"" Aug 13 00:08:03.417809 containerd[1541]: time="2025-08-13T00:08:03.417637091Z" level=info msg="StartContainer for \"8e6f7f7c6d71704f33ffb19864b94b8ef2209186d5091730badd0c8302832639\" returns successfully" Aug 13 00:08:04.311565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount127889750.mount: Deactivated successfully. 
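The containerd records for kube-proxy-mrmzx above trace the CRI call order the kubelet drives for every pod: RunPodSandbox returns a sandbox id, CreateContainer places each container into that sandbox, and StartContainer runs it. A minimal sketch of that sequence follows; the Runtime interface and fakeRuntime type are hypothetical stand-ins for the corresponding CRI RuntimeService RPCs, not real cri-api or client-go types:

```go
// Sketch of the CRI lifecycle visible in the containerd records:
// RunPodSandbox -> CreateContainer -> StartContainer. The Runtime
// interface below is hypothetical shorthand for the CRI RuntimeService.
package main

import "fmt"

type Runtime interface {
	RunPodSandbox(name string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

// startPod runs the three CRI steps in order, stopping on first error.
func startPod(rt Runtime, pod string, containers []string) error {
	sb, err := rt.RunPodSandbox(pod)
	if err != nil {
		return fmt.Errorf("RunPodSandbox %s: %w", pod, err)
	}
	for _, c := range containers {
		id, err := rt.CreateContainer(sb, c)
		if err != nil {
			return fmt.Errorf("CreateContainer %s: %w", c, err)
		}
		if err := rt.StartContainer(id); err != nil {
			return fmt.Errorf("StartContainer %s: %w", c, err)
		}
	}
	return nil
}

// fakeRuntime is a toy in-memory implementation so the sketch runs.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(name string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}
func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	return sb + "/" + name, nil
}
func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Println("started", id)
	return nil
}

func main() {
	if err := startPod(&fakeRuntime{}, "kube-proxy-mrmzx", []string{"kube-proxy"}); err != nil {
		fmt.Println(err)
	}
}
```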
Aug 13 00:08:04.426799 kubelet[2615]: E0813 00:08:04.426740 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:04.437209 kubelet[2615]: I0813 00:08:04.437141 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mrmzx" podStartSLOduration=2.437117437 podStartE2EDuration="2.437117437s" podCreationTimestamp="2025-08-13 00:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:08:04.436542232 +0000 UTC m=+8.136751517" watchObservedRunningTime="2025-08-13 00:08:04.437117437 +0000 UTC m=+8.137326722" Aug 13 00:08:05.119213 containerd[1541]: time="2025-08-13T00:08:05.119149284Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:05.127828 containerd[1541]: time="2025-08-13T00:08:05.127726541Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 13 00:08:05.140628 containerd[1541]: time="2025-08-13T00:08:05.140573763Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:05.155756 containerd[1541]: time="2025-08-13T00:08:05.155703086Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:05.156427 containerd[1541]: time="2025-08-13T00:08:05.156397515Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.838166187s" Aug 13 00:08:05.156500 containerd[1541]: time="2025-08-13T00:08:05.156432125Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 13 00:08:05.160662 containerd[1541]: time="2025-08-13T00:08:05.160036907Z" level=info msg="CreateContainer within sandbox \"90eeb328119bad4588c2831424f156a3261c0407eb27a4c6b25e3fcd4296bdbf\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 00:08:05.193796 containerd[1541]: time="2025-08-13T00:08:05.193739052Z" level=info msg="CreateContainer within sandbox \"90eeb328119bad4588c2831424f156a3261c0407eb27a4c6b25e3fcd4296bdbf\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c12d3f1ea3bb38969a0db7ee3a39a2b6231ef0d4982aabeaa72ddd33c9926058\"" Aug 13 00:08:05.194663 containerd[1541]: time="2025-08-13T00:08:05.194638057Z" level=info msg="StartContainer for \"c12d3f1ea3bb38969a0db7ee3a39a2b6231ef0d4982aabeaa72ddd33c9926058\"" Aug 13 00:08:05.241063 containerd[1541]: time="2025-08-13T00:08:05.241020137Z" level=info msg="StartContainer for \"c12d3f1ea3bb38969a0db7ee3a39a2b6231ef0d4982aabeaa72ddd33c9926058\" returns successfully" Aug 13 00:08:05.449313 kubelet[2615]: I0813 00:08:05.449033 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="tigera-operator/tigera-operator-5bf8dfcb4-rq5c9" podStartSLOduration=1.60936561 podStartE2EDuration="3.449015783s" podCreationTimestamp="2025-08-13 00:08:02 +0000 UTC" firstStartedPulling="2025-08-13 00:08:03.317518352 +0000 UTC m=+7.017727637" lastFinishedPulling="2025-08-13 00:08:05.157168525 +0000 UTC m=+8.857377810" observedRunningTime="2025-08-13 00:08:05.448794883 +0000 UTC m=+9.149004128" watchObservedRunningTime="2025-08-13 00:08:05.449015783 +0000 UTC m=+9.149225068" Aug 13 00:08:09.329410 kubelet[2615]: E0813 00:08:09.329363 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:11.031020 sudo[1747]: pam_unix(sudo:session): session closed for user root Aug 13 00:08:11.049873 sshd[1741]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:11.057493 systemd[1]: sshd@6-10.0.0.72:22-10.0.0.1:34884.service: Deactivated successfully. Aug 13 00:08:11.062979 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 00:08:11.063893 systemd-logind[1517]: Session 7 logged out. Waiting for processes to exit. Aug 13 00:08:11.073928 systemd-logind[1517]: Removed session 7. Aug 13 00:08:12.064116 update_engine[1524]: I20250813 00:08:12.063098 1524 update_attempter.cc:509] Updating boot flags... Aug 13 00:08:12.154379 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3022) Aug 13 00:08:12.212301 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3024) Aug 13 00:08:12.266992 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (3024) Aug 13 00:08:16.110001 kubelet[2615]: I0813 00:08:16.109947 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9026df0b-8746-4473-b208-26f6b123b82b-tigera-ca-bundle\") pod \"calico-typha-7cf7df9fc4-tf75v\" (UID: \"9026df0b-8746-4473-b208-26f6b123b82b\") " pod="calico-system/calico-typha-7cf7df9fc4-tf75v" Aug 13 00:08:16.110001 kubelet[2615]: I0813 00:08:16.109995 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9026df0b-8746-4473-b208-26f6b123b82b-typha-certs\") pod \"calico-typha-7cf7df9fc4-tf75v\" (UID: \"9026df0b-8746-4473-b208-26f6b123b82b\") " pod="calico-system/calico-typha-7cf7df9fc4-tf75v" Aug 13 00:08:16.110440 kubelet[2615]: I0813 00:08:16.110022 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm2nm\" (UniqueName: \"kubernetes.io/projected/9026df0b-8746-4473-b208-26f6b123b82b-kube-api-access-tm2nm\") pod \"calico-typha-7cf7df9fc4-tf75v\" (UID: \"9026df0b-8746-4473-b208-26f6b123b82b\") " pod="calico-system/calico-typha-7cf7df9fc4-tf75v" Aug 13 00:08:16.379178 kubelet[2615]: E0813 00:08:16.376658 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:16.382497 containerd[1541]: time="2025-08-13T00:08:16.382452143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf7df9fc4-tf75v,Uid:9026df0b-8746-4473-b208-26f6b123b82b,Namespace:calico-system,Attempt:0,}" Aug 13 00:08:16.469272 containerd[1541]: time="2025-08-13T00:08:16.468971322Z" 
Aug 13 00:08:16.469272 containerd[1541]: time="2025-08-13T00:08:16.468971322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:08:16.469272 containerd[1541]: time="2025-08-13T00:08:16.469143469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:08:16.469272 containerd[1541]: time="2025-08-13T00:08:16.469186156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:08:16.470314 containerd[1541]: time="2025-08-13T00:08:16.470178430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:08:16.511452 kubelet[2615]: I0813 00:08:16.510673 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f40015f0-352a-48e2-b7ab-17f446fd0a7a-cni-net-dir\") pod \"calico-node-69zbs\" (UID: \"f40015f0-352a-48e2-b7ab-17f446fd0a7a\") " pod="calico-system/calico-node-69zbs"
Aug 13 00:08:16.511452 kubelet[2615]: I0813 00:08:16.510723 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctgpm\" (UniqueName: \"kubernetes.io/projected/f40015f0-352a-48e2-b7ab-17f446fd0a7a-kube-api-access-ctgpm\") pod \"calico-node-69zbs\" (UID: \"f40015f0-352a-48e2-b7ab-17f446fd0a7a\") " pod="calico-system/calico-node-69zbs"
Aug 13 00:08:16.511452 kubelet[2615]: I0813 00:08:16.510745 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f40015f0-352a-48e2-b7ab-17f446fd0a7a-cni-log-dir\") pod \"calico-node-69zbs\" (UID: \"f40015f0-352a-48e2-b7ab-17f446fd0a7a\") " pod="calico-system/calico-node-69zbs"
Aug 13 00:08:16.511452 kubelet[2615]: I0813 00:08:16.510761 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f40015f0-352a-48e2-b7ab-17f446fd0a7a-flexvol-driver-host\") pod \"calico-node-69zbs\" (UID: \"f40015f0-352a-48e2-b7ab-17f446fd0a7a\") " pod="calico-system/calico-node-69zbs"
Aug 13 00:08:16.511452 kubelet[2615]: I0813 00:08:16.510777 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f40015f0-352a-48e2-b7ab-17f446fd0a7a-lib-modules\") pod \"calico-node-69zbs\" (UID: \"f40015f0-352a-48e2-b7ab-17f446fd0a7a\") " pod="calico-system/calico-node-69zbs"
Aug 13 00:08:16.511694 kubelet[2615]: I0813 00:08:16.510795 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f40015f0-352a-48e2-b7ab-17f446fd0a7a-var-lib-calico\") pod \"calico-node-69zbs\" (UID: \"f40015f0-352a-48e2-b7ab-17f446fd0a7a\") " pod="calico-system/calico-node-69zbs"
Aug 13 00:08:16.511694 kubelet[2615]: I0813 00:08:16.510811 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f40015f0-352a-48e2-b7ab-17f446fd0a7a-cni-bin-dir\") pod \"calico-node-69zbs\" (UID: \"f40015f0-352a-48e2-b7ab-17f446fd0a7a\") " pod="calico-system/calico-node-69zbs"
Aug 13 00:08:16.511694 kubelet[2615]: I0813 00:08:16.510826 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f40015f0-352a-48e2-b7ab-17f446fd0a7a-policysync\") pod \"calico-node-69zbs\" (UID: \"f40015f0-352a-48e2-b7ab-17f446fd0a7a\") " pod="calico-system/calico-node-69zbs"
Aug 13 00:08:16.511976 kubelet[2615]: I0813 00:08:16.511819 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f40015f0-352a-48e2-b7ab-17f446fd0a7a-node-certs\") pod \"calico-node-69zbs\" (UID: \"f40015f0-352a-48e2-b7ab-17f446fd0a7a\") " pod="calico-system/calico-node-69zbs"
Aug 13 00:08:16.511976 kubelet[2615]: I0813 00:08:16.511872 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f40015f0-352a-48e2-b7ab-17f446fd0a7a-tigera-ca-bundle\") pod \"calico-node-69zbs\" (UID: \"f40015f0-352a-48e2-b7ab-17f446fd0a7a\") " pod="calico-system/calico-node-69zbs"
Aug 13 00:08:16.511976 kubelet[2615]: I0813 00:08:16.511892 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f40015f0-352a-48e2-b7ab-17f446fd0a7a-var-run-calico\") pod \"calico-node-69zbs\" (UID: \"f40015f0-352a-48e2-b7ab-17f446fd0a7a\") " pod="calico-system/calico-node-69zbs"
Aug 13 00:08:16.511976 kubelet[2615]: I0813 00:08:16.511910 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f40015f0-352a-48e2-b7ab-17f446fd0a7a-xtables-lock\") pod \"calico-node-69zbs\" (UID: \"f40015f0-352a-48e2-b7ab-17f446fd0a7a\") " pod="calico-system/calico-node-69zbs"
Aug 13 00:08:16.553178 containerd[1541]: time="2025-08-13T00:08:16.553112651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf7df9fc4-tf75v,Uid:9026df0b-8746-4473-b208-26f6b123b82b,Namespace:calico-system,Attempt:0,} returns sandbox id \"4145dea7af7853a21e13a0d0c72500fc876dd7e2842ecdf37142e8f210782222\""
Aug 13 00:08:16.556385 kubelet[2615]: E0813 00:08:16.556339 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:08:16.559293 containerd[1541]: time="2025-08-13T00:08:16.559085500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Aug 13 00:08:16.626701 kubelet[2615]: E0813 00:08:16.626667 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 00:08:16.626701 kubelet[2615]: W0813 00:08:16.626694 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 00:08:16.626868 kubelet[2615]: E0813 00:08:16.626718 2615 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 00:08:16.734654 kubelet[2615]: E0813 00:08:16.733736 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wt4m" podUID="edf56ce2-0695-4a38-a297-9fcd045b8bd5"
Aug 13 00:08:16.798326 containerd[1541]: time="2025-08-13T00:08:16.798278988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-69zbs,Uid:f40015f0-352a-48e2-b7ab-17f446fd0a7a,Namespace:calico-system,Attempt:0,}"
Aug 13 00:08:16.815787 kubelet[2615]: I0813 00:08:16.815690 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/edf56ce2-0695-4a38-a297-9fcd045b8bd5-varrun\") pod \"csi-node-driver-5wt4m\" (UID: \"edf56ce2-0695-4a38-a297-9fcd045b8bd5\") " pod="calico-system/csi-node-driver-5wt4m"
Aug 13 00:08:16.816159 kubelet[2615]: I0813 00:08:16.815996 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/edf56ce2-0695-4a38-a297-9fcd045b8bd5-kubelet-dir\") pod \"csi-node-driver-5wt4m\" (UID: \"edf56ce2-0695-4a38-a297-9fcd045b8bd5\") " pod="calico-system/csi-node-driver-5wt4m"
Aug 13 00:08:16.816575 kubelet[2615]: I0813 00:08:16.816563 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/edf56ce2-0695-4a38-a297-9fcd045b8bd5-registration-dir\") pod \"csi-node-driver-5wt4m\" (UID: \"edf56ce2-0695-4a38-a297-9fcd045b8bd5\") " pod="calico-system/csi-node-driver-5wt4m"
Aug 13 00:08:16.818092 kubelet[2615]: I0813 00:08:16.818019 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/edf56ce2-0695-4a38-a297-9fcd045b8bd5-socket-dir\") pod \"csi-node-driver-5wt4m\" (UID: \"edf56ce2-0695-4a38-a297-9fcd045b8bd5\") " pod="calico-system/csi-node-driver-5wt4m"
Aug 13 00:08:16.818383 kubelet[2615]: I0813 00:08:16.818349 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46tq8\" (UniqueName: \"kubernetes.io/projected/edf56ce2-0695-4a38-a297-9fcd045b8bd5-kube-api-access-46tq8\") pod \"csi-node-driver-5wt4m\" (UID: \"edf56ce2-0695-4a38-a297-9fcd045b8bd5\") " pod="calico-system/csi-node-driver-5wt4m"
Aug 13 00:08:16.821195 containerd[1541]: time="2025-08-13T00:08:16.820904308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 13 00:08:16.821195 containerd[1541]: time="2025-08-13T00:08:16.820986801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 13 00:08:16.821195 containerd[1541]: time="2025-08-13T00:08:16.820998523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:08:16.822667 containerd[1541]: time="2025-08-13T00:08:16.822590930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 00:08:16.873051 containerd[1541]: time="2025-08-13T00:08:16.872992971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-69zbs,Uid:f40015f0-352a-48e2-b7ab-17f446fd0a7a,Namespace:calico-system,Attempt:0,} returns sandbox id \"5dd43260499eff6a45a564be591e925f80c262a811e103e5358ca3232f84df47\""
Aug 13 00:08:17.219146 systemd[1]: run-containerd-runc-k8s.io-4145dea7af7853a21e13a0d0c72500fc876dd7e2842ecdf37142e8f210782222-runc.lDHxyC.mount: Deactivated successfully.
Aug 13 00:08:17.675359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094171768.mount: Deactivated successfully.
Aug 13 00:08:17.976416 containerd[1541]: time="2025-08-13T00:08:17.976115593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Aug 13 00:08:17.984519 containerd[1541]: time="2025-08-13T00:08:17.984221877Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.417960141s"
Aug 13 00:08:17.985463 containerd[1541]: time="2025-08-13T00:08:17.985364567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 00:08:17.987729 containerd[1541]: time="2025-08-13T00:08:17.987678670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:08:17.988518 containerd[1541]: time="2025-08-13T00:08:17.988485630Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 00:08:17.989096 containerd[1541]: time="2025-08-13T00:08:17.989047834Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
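Between the pull above and the container start below, the kubelet drives containerd through the CRI RuntimeService in a fixed order: the sandbox returned earlier is reused, a container is created inside it, then started. A minimal client sketch of that call order against the CRI gRPC API (k8s.io/cri-api), assuming containerd's default socket path; a real call would also need a SandboxConfig matching the sandbox, so this illustrates the sequence, not a working tool:

```go
// Sketch of the CreateContainer -> StartContainer sequence visible in the log.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default containerd CRI endpoint (assumption for this sketch).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox id returned by RunPodSandbox earlier in the log.
	sandboxID := "4145dea7af7853a21e13a0d0c72500fc876dd7e2842ecdf37142e8f210782222"

	resp, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-typha", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.2"},
		},
		SandboxConfig: &runtimeapi.PodSandboxConfig{}, // elided; must match the sandbox in practice
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: resp.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started", resp.ContainerId)
}
```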
time="2025-08-13T00:08:18.006821635Z" level=info msg="CreateContainer within sandbox \"4145dea7af7853a21e13a0d0c72500fc876dd7e2842ecdf37142e8f210782222\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 00:08:18.017321 containerd[1541]: time="2025-08-13T00:08:18.017263436Z" level=info msg="CreateContainer within sandbox \"4145dea7af7853a21e13a0d0c72500fc876dd7e2842ecdf37142e8f210782222\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7e766dfd7cc53596fda374aaa0e056caf1734f2a1cee278502a7ab4e1975242c\"" Aug 13 00:08:18.018433 containerd[1541]: time="2025-08-13T00:08:18.017770868Z" level=info msg="StartContainer for \"7e766dfd7cc53596fda374aaa0e056caf1734f2a1cee278502a7ab4e1975242c\"" Aug 13 00:08:18.142984 containerd[1541]: time="2025-08-13T00:08:18.142870654Z" level=info msg="StartContainer for \"7e766dfd7cc53596fda374aaa0e056caf1734f2a1cee278502a7ab4e1975242c\" returns successfully" Aug 13 00:08:18.390866 kubelet[2615]: E0813 00:08:18.390820 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wt4m" podUID="edf56ce2-0695-4a38-a297-9fcd045b8bd5" Aug 13 00:08:18.478898 kubelet[2615]: E0813 00:08:18.478857 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:18.519268 kubelet[2615]: I0813 00:08:18.519006 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cf7df9fc4-tf75v" podStartSLOduration=1.092060118 podStartE2EDuration="2.518986687s" podCreationTimestamp="2025-08-13 00:08:16 +0000 UTC" firstStartedPulling="2025-08-13 00:08:16.558243489 +0000 UTC m=+20.258452774" lastFinishedPulling="2025-08-13 00:08:17.985170058 +0000 UTC m=+21.685379343" observedRunningTime="2025-08-13 00:08:18.515837201 +0000 UTC m=+22.216046486" watchObservedRunningTime="2025-08-13 00:08:18.518986687 +0000 UTC m=+22.219195972" Aug 13 00:08:18.528309 kubelet[2615]: E0813 00:08:18.528267 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:08:18.528309 kubelet[2615]: W0813 00:08:18.528295 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:08:18.528309 kubelet[2615]: E0813 00:08:18.528319 2615 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 00:08:18.528589 kubelet[2615]: E0813 00:08:18.528562 2615 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 00:08:18.528589 kubelet[2615]: W0813 00:08:18.528579 2615 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 00:08:18.528648 kubelet[2615]: E0813 00:08:18.528595 2615 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Aug 13 00:08:19.083674 containerd[1541]: time="2025-08-13T00:08:19.083603988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:19.086036 containerd[1541]: time="2025-08-13T00:08:19.085984351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 13 00:08:19.086871 containerd[1541]: time="2025-08-13T00:08:19.086837026Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:19.089457 containerd[1541]: time="2025-08-13T00:08:19.089408015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:19.090403 containerd[1541]: time="2025-08-13T00:08:19.090206643Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.104804031s" Aug 13 00:08:19.090403 containerd[1541]: time="2025-08-13T00:08:19.090248769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 13 00:08:19.108814 containerd[1541]: time="2025-08-13T00:08:19.108754999Z" level=info msg="CreateContainer within sandbox \"5dd43260499eff6a45a564be591e925f80c262a811e103e5358ca3232f84df47\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 00:08:19.122490 containerd[1541]: time="2025-08-13T00:08:19.122437495Z" level=info msg="CreateContainer within sandbox \"5dd43260499eff6a45a564be591e925f80c262a811e103e5358ca3232f84df47\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"58ef6eff0e499fe0c0a0b2537f63b06e7f65bd55258de5d2712b784834cd9f1a\"" Aug 13 00:08:19.123322 containerd[1541]: time="2025-08-13T00:08:19.123155872Z" level=info msg="StartContainer for \"58ef6eff0e499fe0c0a0b2537f63b06e7f65bd55258de5d2712b784834cd9f1a\"" Aug 13 00:08:19.188017 containerd[1541]: time="2025-08-13T00:08:19.187891813Z" level=info msg="StartContainer for \"58ef6eff0e499fe0c0a0b2537f63b06e7f65bd55258de5d2712b784834cd9f1a\" returns successfully" Aug 13 00:08:19.246816 systemd[1]:
run-containerd-io.containerd.runtime.v2.task-k8s.io-58ef6eff0e499fe0c0a0b2537f63b06e7f65bd55258de5d2712b784834cd9f1a-rootfs.mount: Deactivated successfully. Aug 13 00:08:19.265213 containerd[1541]: time="2025-08-13T00:08:19.261426267Z" level=info msg="shim disconnected" id=58ef6eff0e499fe0c0a0b2537f63b06e7f65bd55258de5d2712b784834cd9f1a namespace=k8s.io Aug 13 00:08:19.265213 containerd[1541]: time="2025-08-13T00:08:19.265114247Z" level=warning msg="cleaning up after shim disconnected" id=58ef6eff0e499fe0c0a0b2537f63b06e7f65bd55258de5d2712b784834cd9f1a namespace=k8s.io Aug 13 00:08:19.265213 containerd[1541]: time="2025-08-13T00:08:19.265141051Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:08:19.480215 kubelet[2615]: I0813 00:08:19.479732 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:08:19.480215 kubelet[2615]: E0813 00:08:19.480052 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:19.483197 containerd[1541]: time="2025-08-13T00:08:19.483145261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 00:08:20.390739 kubelet[2615]: E0813 00:08:20.390681 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wt4m" podUID="edf56ce2-0695-4a38-a297-9fcd045b8bd5" Aug 13 00:08:21.609638 containerd[1541]: time="2025-08-13T00:08:21.609588748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:21.610697 containerd[1541]: time="2025-08-13T00:08:21.610665562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 13 00:08:21.611527 containerd[1541]: time="2025-08-13T00:08:21.611501066Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:21.615090 containerd[1541]: time="2025-08-13T00:08:21.615042306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:21.616036 containerd[1541]: time="2025-08-13T00:08:21.615997785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.132801438s" Aug 13 00:08:21.616036 containerd[1541]: time="2025-08-13T00:08:21.616034430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 13 00:08:21.619550 containerd[1541]: time="2025-08-13T00:08:21.619517103Z" level=info msg="CreateContainer within sandbox \"5dd43260499eff6a45a564be591e925f80c262a811e103e5358ca3232f84df47\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 00:08:21.638346 containerd[1541]: 
time="2025-08-13T00:08:21.638292997Z" level=info msg="CreateContainer within sandbox \"5dd43260499eff6a45a564be591e925f80c262a811e103e5358ca3232f84df47\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"71461c21f8216b5a0137e26a84b085a69455449259de0107e0db952f8d05e73c\"" Aug 13 00:08:21.639055 containerd[1541]: time="2025-08-13T00:08:21.639030329Z" level=info msg="StartContainer for \"71461c21f8216b5a0137e26a84b085a69455449259de0107e0db952f8d05e73c\"" Aug 13 00:08:21.693995 containerd[1541]: time="2025-08-13T00:08:21.692590149Z" level=info msg="StartContainer for \"71461c21f8216b5a0137e26a84b085a69455449259de0107e0db952f8d05e73c\" returns successfully" Aug 13 00:08:22.378883 containerd[1541]: time="2025-08-13T00:08:22.378838146Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 00:08:22.392051 kubelet[2615]: E0813 00:08:22.390960 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5wt4m" podUID="edf56ce2-0695-4a38-a297-9fcd045b8bd5" Aug 13 00:08:22.411000 kubelet[2615]: I0813 00:08:22.410465 2615 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 00:08:22.419676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71461c21f8216b5a0137e26a84b085a69455449259de0107e0db952f8d05e73c-rootfs.mount: Deactivated successfully. Aug 13 00:08:22.428174 containerd[1541]: time="2025-08-13T00:08:22.427990167Z" level=info msg="shim disconnected" id=71461c21f8216b5a0137e26a84b085a69455449259de0107e0db952f8d05e73c namespace=k8s.io Aug 13 00:08:22.428174 containerd[1541]: time="2025-08-13T00:08:22.428168468Z" level=warning msg="cleaning up after shim disconnected" id=71461c21f8216b5a0137e26a84b085a69455449259de0107e0db952f8d05e73c namespace=k8s.io Aug 13 00:08:22.428174 containerd[1541]: time="2025-08-13T00:08:22.428180069Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 00:08:22.498107 containerd[1541]: time="2025-08-13T00:08:22.497887380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 00:08:22.566006 kubelet[2615]: I0813 00:08:22.565962 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d417f09c-4d26-45a8-bedf-3d32ed52c91e-config-volume\") pod \"coredns-7c65d6cfc9-8n2ps\" (UID: \"d417f09c-4d26-45a8-bedf-3d32ed52c91e\") " pod="kube-system/coredns-7c65d6cfc9-8n2ps" Aug 13 00:08:22.566006 kubelet[2615]: I0813 00:08:22.566008 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llvjf\" (UniqueName: \"kubernetes.io/projected/d417f09c-4d26-45a8-bedf-3d32ed52c91e-kube-api-access-llvjf\") pod \"coredns-7c65d6cfc9-8n2ps\" (UID: \"d417f09c-4d26-45a8-bedf-3d32ed52c91e\") " pod="kube-system/coredns-7c65d6cfc9-8n2ps" Aug 13 00:08:22.566377 kubelet[2615]: I0813 00:08:22.566028 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/67aae47b-9ca7-424c-9205-116dd9244930-calico-apiserver-certs\") pod 
\"calico-apiserver-568ff5db89-898g4\" (UID: \"67aae47b-9ca7-424c-9205-116dd9244930\") " pod="calico-apiserver/calico-apiserver-568ff5db89-898g4" Aug 13 00:08:22.566377 kubelet[2615]: I0813 00:08:22.566046 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq49r\" (UniqueName: \"kubernetes.io/projected/97db4313-54d9-4d45-8d85-4c3787c17fbe-kube-api-access-bq49r\") pod \"whisker-c5fd5c744-nscz9\" (UID: \"97db4313-54d9-4d45-8d85-4c3787c17fbe\") " pod="calico-system/whisker-c5fd5c744-nscz9" Aug 13 00:08:22.566377 kubelet[2615]: I0813 00:08:22.566062 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29kws\" (UniqueName: \"kubernetes.io/projected/f323d6ae-fc54-4d5d-b0e3-3e41312708c1-kube-api-access-29kws\") pod \"coredns-7c65d6cfc9-qhll4\" (UID: \"f323d6ae-fc54-4d5d-b0e3-3e41312708c1\") " pod="kube-system/coredns-7c65d6cfc9-qhll4" Aug 13 00:08:22.566377 kubelet[2615]: I0813 00:08:22.566106 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6lt6\" (UniqueName: \"kubernetes.io/projected/7e176c4c-6fda-4c82-bb44-ae8b69c41d34-kube-api-access-l6lt6\") pod \"calico-apiserver-568ff5db89-pgwqw\" (UID: \"7e176c4c-6fda-4c82-bb44-ae8b69c41d34\") " pod="calico-apiserver/calico-apiserver-568ff5db89-pgwqw" Aug 13 00:08:22.566377 kubelet[2615]: I0813 00:08:22.566125 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a308d1f0-5106-4066-87c2-bd682359b04c-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-ntl8c\" (UID: \"a308d1f0-5106-4066-87c2-bd682359b04c\") " pod="calico-system/goldmane-58fd7646b9-ntl8c" Aug 13 00:08:22.566571 kubelet[2615]: I0813 00:08:22.566141 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c8xq\" (UniqueName: \"kubernetes.io/projected/67aae47b-9ca7-424c-9205-116dd9244930-kube-api-access-6c8xq\") pod \"calico-apiserver-568ff5db89-898g4\" (UID: \"67aae47b-9ca7-424c-9205-116dd9244930\") " pod="calico-apiserver/calico-apiserver-568ff5db89-898g4" Aug 13 00:08:22.566571 kubelet[2615]: I0813 00:08:22.566160 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97db4313-54d9-4d45-8d85-4c3787c17fbe-whisker-ca-bundle\") pod \"whisker-c5fd5c744-nscz9\" (UID: \"97db4313-54d9-4d45-8d85-4c3787c17fbe\") " pod="calico-system/whisker-c5fd5c744-nscz9" Aug 13 00:08:22.566571 kubelet[2615]: I0813 00:08:22.566176 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f323d6ae-fc54-4d5d-b0e3-3e41312708c1-config-volume\") pod \"coredns-7c65d6cfc9-qhll4\" (UID: \"f323d6ae-fc54-4d5d-b0e3-3e41312708c1\") " pod="kube-system/coredns-7c65d6cfc9-qhll4" Aug 13 00:08:22.566571 kubelet[2615]: I0813 00:08:22.566190 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a308d1f0-5106-4066-87c2-bd682359b04c-goldmane-key-pair\") pod \"goldmane-58fd7646b9-ntl8c\" (UID: \"a308d1f0-5106-4066-87c2-bd682359b04c\") " pod="calico-system/goldmane-58fd7646b9-ntl8c" Aug 13 00:08:22.566571 kubelet[2615]: I0813 00:08:22.566216 2615 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e176c4c-6fda-4c82-bb44-ae8b69c41d34-calico-apiserver-certs\") pod \"calico-apiserver-568ff5db89-pgwqw\" (UID: \"7e176c4c-6fda-4c82-bb44-ae8b69c41d34\") " pod="calico-apiserver/calico-apiserver-568ff5db89-pgwqw" Aug 13 00:08:22.566697 kubelet[2615]: I0813 00:08:22.566231 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a308d1f0-5106-4066-87c2-bd682359b04c-config\") pod \"goldmane-58fd7646b9-ntl8c\" (UID: \"a308d1f0-5106-4066-87c2-bd682359b04c\") " pod="calico-system/goldmane-58fd7646b9-ntl8c" Aug 13 00:08:22.566697 kubelet[2615]: I0813 00:08:22.566270 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bprc\" (UniqueName: \"kubernetes.io/projected/8acdfa87-3615-4f51-932b-63fd53529270-kube-api-access-4bprc\") pod \"calico-kube-controllers-6656487d5c-69t8m\" (UID: \"8acdfa87-3615-4f51-932b-63fd53529270\") " pod="calico-system/calico-kube-controllers-6656487d5c-69t8m" Aug 13 00:08:22.566697 kubelet[2615]: I0813 00:08:22.566287 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cksnc\" (UniqueName: \"kubernetes.io/projected/a308d1f0-5106-4066-87c2-bd682359b04c-kube-api-access-cksnc\") pod \"goldmane-58fd7646b9-ntl8c\" (UID: \"a308d1f0-5106-4066-87c2-bd682359b04c\") " pod="calico-system/goldmane-58fd7646b9-ntl8c" Aug 13 00:08:22.566697 kubelet[2615]: I0813 00:08:22.566307 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8acdfa87-3615-4f51-932b-63fd53529270-tigera-ca-bundle\") pod \"calico-kube-controllers-6656487d5c-69t8m\" (UID: \"8acdfa87-3615-4f51-932b-63fd53529270\") " pod="calico-system/calico-kube-controllers-6656487d5c-69t8m" Aug 13 00:08:22.566697 kubelet[2615]: I0813 00:08:22.566336 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97db4313-54d9-4d45-8d85-4c3787c17fbe-whisker-backend-key-pair\") pod \"whisker-c5fd5c744-nscz9\" (UID: \"97db4313-54d9-4d45-8d85-4c3787c17fbe\") " pod="calico-system/whisker-c5fd5c744-nscz9" Aug 13 00:08:22.759358 kubelet[2615]: E0813 00:08:22.758370 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:22.760266 containerd[1541]: time="2025-08-13T00:08:22.760120725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8n2ps,Uid:d417f09c-4d26-45a8-bedf-3d32ed52c91e,Namespace:kube-system,Attempt:0,}" Aug 13 00:08:22.762611 kubelet[2615]: E0813 00:08:22.762356 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:22.762974 containerd[1541]: time="2025-08-13T00:08:22.762894895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qhll4,Uid:f323d6ae-fc54-4d5d-b0e3-3e41312708c1,Namespace:kube-system,Attempt:0,}" Aug 13 00:08:22.784536 containerd[1541]: time="2025-08-13T00:08:22.784158630Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6656487d5c-69t8m,Uid:8acdfa87-3615-4f51-932b-63fd53529270,Namespace:calico-system,Attempt:0,}" Aug 13 00:08:22.785545 containerd[1541]: time="2025-08-13T00:08:22.785349372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568ff5db89-pgwqw,Uid:7e176c4c-6fda-4c82-bb44-ae8b69c41d34,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:08:22.786413 containerd[1541]: time="2025-08-13T00:08:22.786381335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ntl8c,Uid:a308d1f0-5106-4066-87c2-bd682359b04c,Namespace:calico-system,Attempt:0,}" Aug 13 00:08:22.792571 containerd[1541]: time="2025-08-13T00:08:22.792126260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568ff5db89-898g4,Uid:67aae47b-9ca7-424c-9205-116dd9244930,Namespace:calico-apiserver,Attempt:0,}" Aug 13 00:08:22.805495 containerd[1541]: time="2025-08-13T00:08:22.805448089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c5fd5c744-nscz9,Uid:97db4313-54d9-4d45-8d85-4c3787c17fbe,Namespace:calico-system,Attempt:0,}" Aug 13 00:08:23.228485 containerd[1541]: time="2025-08-13T00:08:23.228338736Z" level=error msg="Failed to destroy network for sandbox \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.229057 containerd[1541]: time="2025-08-13T00:08:23.228966968Z" level=error msg="encountered an error cleaning up failed sandbox \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.229057 containerd[1541]: time="2025-08-13T00:08:23.229027055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c5fd5c744-nscz9,Uid:97db4313-54d9-4d45-8d85-4c3787c17fbe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.230264 kubelet[2615]: E0813 00:08:23.230042 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.232957 kubelet[2615]: E0813 00:08:23.232900 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c5fd5c744-nscz9" Aug 13 00:08:23.233861 kubelet[2615]: E0813 00:08:23.233814 2615 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c5fd5c744-nscz9" Aug 13 00:08:23.234038 kubelet[2615]: E0813 00:08:23.234007 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c5fd5c744-nscz9_calico-system(97db4313-54d9-4d45-8d85-4c3787c17fbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c5fd5c744-nscz9_calico-system(97db4313-54d9-4d45-8d85-4c3787c17fbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c5fd5c744-nscz9" podUID="97db4313-54d9-4d45-8d85-4c3787c17fbe" Aug 13 00:08:23.236459 containerd[1541]: time="2025-08-13T00:08:23.236327210Z" level=error msg="Failed to destroy network for sandbox \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.238441 containerd[1541]: time="2025-08-13T00:08:23.238389446Z" level=error msg="encountered an error cleaning up failed sandbox \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.238631 containerd[1541]: time="2025-08-13T00:08:23.238607231Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qhll4,Uid:f323d6ae-fc54-4d5d-b0e3-3e41312708c1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.239120 kubelet[2615]: E0813 00:08:23.238912 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.239120 kubelet[2615]: E0813 00:08:23.238980 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qhll4" Aug 13 00:08:23.239120 kubelet[2615]: E0813 
00:08:23.239000 2615 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-qhll4" Aug 13 00:08:23.239257 kubelet[2615]: E0813 00:08:23.239047 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-qhll4_kube-system(f323d6ae-fc54-4d5d-b0e3-3e41312708c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-qhll4_kube-system(f323d6ae-fc54-4d5d-b0e3-3e41312708c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qhll4" podUID="f323d6ae-fc54-4d5d-b0e3-3e41312708c1" Aug 13 00:08:23.251600 containerd[1541]: time="2025-08-13T00:08:23.251539191Z" level=error msg="Failed to destroy network for sandbox \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.253094 containerd[1541]: time="2025-08-13T00:08:23.253027921Z" level=error msg="Failed to destroy network for sandbox \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.253218 containerd[1541]: time="2025-08-13T00:08:23.253030121Z" level=error msg="Failed to destroy network for sandbox \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.254159 containerd[1541]: time="2025-08-13T00:08:23.254057359Z" level=error msg="encountered an error cleaning up failed sandbox \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.254215 containerd[1541]: time="2025-08-13T00:08:23.254194214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568ff5db89-898g4,Uid:67aae47b-9ca7-424c-9205-116dd9244930,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.255025 kubelet[2615]: E0813 00:08:23.254512 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.255025 kubelet[2615]: E0813 00:08:23.254574 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568ff5db89-898g4" Aug 13 00:08:23.255025 kubelet[2615]: E0813 00:08:23.254594 2615 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568ff5db89-898g4" Aug 13 00:08:23.255188 containerd[1541]: time="2025-08-13T00:08:23.254530973Z" level=error msg="encountered an error cleaning up failed sandbox \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.255188 containerd[1541]: time="2025-08-13T00:08:23.254567897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6656487d5c-69t8m,Uid:8acdfa87-3615-4f51-932b-63fd53529270,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.255273 kubelet[2615]: E0813 00:08:23.254637 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568ff5db89-898g4_calico-apiserver(67aae47b-9ca7-424c-9205-116dd9244930)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568ff5db89-898g4_calico-apiserver(67aae47b-9ca7-424c-9205-116dd9244930)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568ff5db89-898g4" podUID="67aae47b-9ca7-424c-9205-116dd9244930" Aug 13 00:08:23.255273 kubelet[2615]: E0813 00:08:23.254914 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.255273 kubelet[2615]: E0813 00:08:23.254941 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6656487d5c-69t8m" Aug 13 00:08:23.255357 kubelet[2615]: E0813 00:08:23.254956 2615 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6656487d5c-69t8m" Aug 13 00:08:23.255357 kubelet[2615]: E0813 00:08:23.254986 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6656487d5c-69t8m_calico-system(8acdfa87-3615-4f51-932b-63fd53529270)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6656487d5c-69t8m_calico-system(8acdfa87-3615-4f51-932b-63fd53529270)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6656487d5c-69t8m" podUID="8acdfa87-3615-4f51-932b-63fd53529270" Aug 13 00:08:23.255590 containerd[1541]: time="2025-08-13T00:08:23.255540609Z" level=error msg="Failed to destroy network for sandbox \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.256345 containerd[1541]: time="2025-08-13T00:08:23.256195844Z" level=error msg="encountered an error cleaning up failed sandbox \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.256747 containerd[1541]: time="2025-08-13T00:08:23.256361823Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568ff5db89-pgwqw,Uid:7e176c4c-6fda-4c82-bb44-ae8b69c41d34,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.256818 kubelet[2615]: E0813 00:08:23.256571 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.256818 kubelet[2615]: E0813 00:08:23.256620 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568ff5db89-pgwqw" Aug 13 00:08:23.256818 kubelet[2615]: E0813 00:08:23.256638 2615 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-568ff5db89-pgwqw" Aug 13 00:08:23.256914 kubelet[2615]: E0813 00:08:23.256667 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-568ff5db89-pgwqw_calico-apiserver(7e176c4c-6fda-4c82-bb44-ae8b69c41d34)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-568ff5db89-pgwqw_calico-apiserver(7e176c4c-6fda-4c82-bb44-ae8b69c41d34)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568ff5db89-pgwqw" podUID="7e176c4c-6fda-4c82-bb44-ae8b69c41d34" Aug 13 00:08:23.264512 containerd[1541]: time="2025-08-13T00:08:23.264437867Z" level=error msg="encountered an error cleaning up failed sandbox \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.264641 containerd[1541]: time="2025-08-13T00:08:23.264546519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8n2ps,Uid:d417f09c-4d26-45a8-bedf-3d32ed52c91e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.264990 kubelet[2615]: E0813 00:08:23.264769 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.264990 
kubelet[2615]: E0813 00:08:23.264820 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8n2ps" Aug 13 00:08:23.264990 kubelet[2615]: E0813 00:08:23.264837 2615 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8n2ps" Aug 13 00:08:23.265106 kubelet[2615]: E0813 00:08:23.264886 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-8n2ps_kube-system(d417f09c-4d26-45a8-bedf-3d32ed52c91e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-8n2ps_kube-system(d417f09c-4d26-45a8-bedf-3d32ed52c91e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8n2ps" podUID="d417f09c-4d26-45a8-bedf-3d32ed52c91e" Aug 13 00:08:23.266611 containerd[1541]: time="2025-08-13T00:08:23.266205269Z" level=error msg="Failed to destroy network for sandbox \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.267187 containerd[1541]: time="2025-08-13T00:08:23.267011001Z" level=error msg="encountered an error cleaning up failed sandbox \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.267187 containerd[1541]: time="2025-08-13T00:08:23.267067848Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ntl8c,Uid:a308d1f0-5106-4066-87c2-bd682359b04c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.267613 kubelet[2615]: E0813 00:08:23.267445 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Aug 13 00:08:23.267613 kubelet[2615]: E0813 00:08:23.267502 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ntl8c" Aug 13 00:08:23.267613 kubelet[2615]: E0813 00:08:23.267521 2615 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-ntl8c" Aug 13 00:08:23.267726 kubelet[2615]: E0813 00:08:23.267570 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-ntl8c_calico-system(a308d1f0-5106-4066-87c2-bd682359b04c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-ntl8c_calico-system(a308d1f0-5106-4066-87c2-bd682359b04c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-ntl8c" podUID="a308d1f0-5106-4066-87c2-bd682359b04c" Aug 13 00:08:23.499690 kubelet[2615]: I0813 00:08:23.499388 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:23.501249 containerd[1541]: time="2025-08-13T00:08:23.500417788Z" level=info msg="StopPodSandbox for \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\"" Aug 13 00:08:23.501249 containerd[1541]: time="2025-08-13T00:08:23.500602249Z" level=info msg="Ensure that sandbox 7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539 in task-service has been cleanup successfully" Aug 13 00:08:23.502394 kubelet[2615]: I0813 00:08:23.502363 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:23.503042 containerd[1541]: time="2025-08-13T00:08:23.502992402Z" level=info msg="StopPodSandbox for \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\"" Aug 13 00:08:23.503403 containerd[1541]: time="2025-08-13T00:08:23.503364005Z" level=info msg="Ensure that sandbox e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d in task-service has been cleanup successfully" Aug 13 00:08:23.506104 kubelet[2615]: I0813 00:08:23.505957 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:23.506634 containerd[1541]: time="2025-08-13T00:08:23.506594015Z" level=info msg="StopPodSandbox for \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\"" Aug 13 00:08:23.506801 containerd[1541]: time="2025-08-13T00:08:23.506778196Z" 
level=info msg="Ensure that sandbox 1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe in task-service has been cleanup successfully" Aug 13 00:08:23.508822 kubelet[2615]: I0813 00:08:23.508647 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:23.511965 containerd[1541]: time="2025-08-13T00:08:23.511435969Z" level=info msg="StopPodSandbox for \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\"" Aug 13 00:08:23.512371 containerd[1541]: time="2025-08-13T00:08:23.512332551Z" level=info msg="Ensure that sandbox 8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85 in task-service has been cleanup successfully" Aug 13 00:08:23.514475 kubelet[2615]: I0813 00:08:23.514361 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:23.517546 containerd[1541]: time="2025-08-13T00:08:23.517372448Z" level=info msg="StopPodSandbox for \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\"" Aug 13 00:08:23.517705 containerd[1541]: time="2025-08-13T00:08:23.517675803Z" level=info msg="Ensure that sandbox d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a in task-service has been cleanup successfully" Aug 13 00:08:23.520246 kubelet[2615]: I0813 00:08:23.520211 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:23.523467 containerd[1541]: time="2025-08-13T00:08:23.522311853Z" level=info msg="StopPodSandbox for \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\"" Aug 13 00:08:23.523590 kubelet[2615]: I0813 00:08:23.522939 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:23.523657 containerd[1541]: time="2025-08-13T00:08:23.523613122Z" level=info msg="StopPodSandbox for \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\"" Aug 13 00:08:23.523896 containerd[1541]: time="2025-08-13T00:08:23.523842788Z" level=info msg="Ensure that sandbox a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32 in task-service has been cleanup successfully" Aug 13 00:08:23.526130 containerd[1541]: time="2025-08-13T00:08:23.524212991Z" level=info msg="Ensure that sandbox 78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea in task-service has been cleanup successfully" Aug 13 00:08:23.585245 containerd[1541]: time="2025-08-13T00:08:23.585195808Z" level=error msg="StopPodSandbox for \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\" failed" error="failed to destroy network for sandbox \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.585828 kubelet[2615]: E0813 00:08:23.585657 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" podSandboxID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:23.585828 kubelet[2615]: E0813 00:08:23.585721 2615 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe"} Aug 13 00:08:23.585828 kubelet[2615]: E0813 00:08:23.585775 2615 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67aae47b-9ca7-424c-9205-116dd9244930\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:08:23.585828 kubelet[2615]: E0813 00:08:23.585799 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"67aae47b-9ca7-424c-9205-116dd9244930\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568ff5db89-898g4" podUID="67aae47b-9ca7-424c-9205-116dd9244930" Aug 13 00:08:23.586065 containerd[1541]: time="2025-08-13T00:08:23.585882247Z" level=error msg="StopPodSandbox for \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\" failed" error="failed to destroy network for sandbox \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.588598 kubelet[2615]: E0813 00:08:23.586266 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:23.588598 kubelet[2615]: E0813 00:08:23.586311 2615 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d"} Aug 13 00:08:23.588598 kubelet[2615]: E0813 00:08:23.586336 2615 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f323d6ae-fc54-4d5d-b0e3-3e41312708c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:08:23.588598 kubelet[2615]: E0813 00:08:23.586354 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"f323d6ae-fc54-4d5d-b0e3-3e41312708c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-qhll4" podUID="f323d6ae-fc54-4d5d-b0e3-3e41312708c1" Aug 13 00:08:23.588830 containerd[1541]: time="2025-08-13T00:08:23.586490957Z" level=error msg="StopPodSandbox for \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\" failed" error="failed to destroy network for sandbox \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.588830 containerd[1541]: time="2025-08-13T00:08:23.587042620Z" level=error msg="StopPodSandbox for \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\" failed" error="failed to destroy network for sandbox \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.589096 kubelet[2615]: E0813 00:08:23.589029 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:23.589163 kubelet[2615]: E0813 00:08:23.589113 2615 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32"} Aug 13 00:08:23.589163 kubelet[2615]: E0813 00:08:23.589151 2615 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8acdfa87-3615-4f51-932b-63fd53529270\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:08:23.589231 kubelet[2615]: E0813 00:08:23.589029 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:23.589231 kubelet[2615]: E0813 00:08:23.589174 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8acdfa87-3615-4f51-932b-63fd53529270\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6656487d5c-69t8m" podUID="8acdfa87-3615-4f51-932b-63fd53529270" Aug 13 00:08:23.589231 kubelet[2615]: E0813 00:08:23.589196 2615 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539"} Aug 13 00:08:23.589231 kubelet[2615]: E0813 00:08:23.589228 2615 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a308d1f0-5106-4066-87c2-bd682359b04c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:08:23.589348 kubelet[2615]: E0813 00:08:23.589247 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a308d1f0-5106-4066-87c2-bd682359b04c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-ntl8c" podUID="a308d1f0-5106-4066-87c2-bd682359b04c" Aug 13 00:08:23.591149 containerd[1541]: time="2025-08-13T00:08:23.591096964Z" level=error msg="StopPodSandbox for \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\" failed" error="failed to destroy network for sandbox \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.591488 kubelet[2615]: E0813 00:08:23.591444 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:23.591561 kubelet[2615]: E0813 00:08:23.591501 2615 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85"} Aug 13 00:08:23.591561 kubelet[2615]: E0813 00:08:23.591534 2615 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7e176c4c-6fda-4c82-bb44-ae8b69c41d34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:08:23.591631 kubelet[2615]: E0813 00:08:23.591555 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7e176c4c-6fda-4c82-bb44-ae8b69c41d34\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-568ff5db89-pgwqw" podUID="7e176c4c-6fda-4c82-bb44-ae8b69c41d34" Aug 13 00:08:23.591846 containerd[1541]: time="2025-08-13T00:08:23.591802364Z" level=error msg="StopPodSandbox for \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\" failed" error="failed to destroy network for sandbox \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.592023 kubelet[2615]: E0813 00:08:23.591985 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:23.592101 kubelet[2615]: E0813 00:08:23.592028 2615 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a"} Aug 13 00:08:23.592101 kubelet[2615]: E0813 00:08:23.592057 2615 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"97db4313-54d9-4d45-8d85-4c3787c17fbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:08:23.592176 kubelet[2615]: E0813 00:08:23.592149 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"97db4313-54d9-4d45-8d85-4c3787c17fbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c5fd5c744-nscz9" podUID="97db4313-54d9-4d45-8d85-4c3787c17fbe" Aug 13 00:08:23.596577 containerd[1541]: time="2025-08-13T00:08:23.596516744Z" level=error msg="StopPodSandbox for \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\" failed" error="failed to destroy network for sandbox \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:23.596815 kubelet[2615]: E0813 00:08:23.596756 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:23.596861 kubelet[2615]: E0813 00:08:23.596810 2615 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea"} Aug 13 00:08:23.596861 kubelet[2615]: E0813 00:08:23.596840 2615 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d417f09c-4d26-45a8-bedf-3d32ed52c91e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:08:23.596945 kubelet[2615]: E0813 00:08:23.596860 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d417f09c-4d26-45a8-bedf-3d32ed52c91e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8n2ps" podUID="d417f09c-4d26-45a8-bedf-3d32ed52c91e" Aug 13 00:08:23.675435 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea-shm.mount: Deactivated successfully. 
Aug 13 00:08:24.395766 containerd[1541]: time="2025-08-13T00:08:24.395724492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5wt4m,Uid:edf56ce2-0695-4a38-a297-9fcd045b8bd5,Namespace:calico-system,Attempt:0,}" Aug 13 00:08:24.466311 containerd[1541]: time="2025-08-13T00:08:24.466182597Z" level=error msg="Failed to destroy network for sandbox \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:24.466641 containerd[1541]: time="2025-08-13T00:08:24.466537996Z" level=error msg="encountered an error cleaning up failed sandbox \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:24.466641 containerd[1541]: time="2025-08-13T00:08:24.466595362Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5wt4m,Uid:edf56ce2-0695-4a38-a297-9fcd045b8bd5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:24.468848 kubelet[2615]: E0813 00:08:24.466954 2615 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:24.469041 kubelet[2615]: E0813 00:08:24.468880 2615 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5wt4m" Aug 13 00:08:24.469041 kubelet[2615]: E0813 00:08:24.468902 2615 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5wt4m" Aug 13 00:08:24.469041 kubelet[2615]: E0813 00:08:24.468943 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5wt4m_calico-system(edf56ce2-0695-4a38-a297-9fcd045b8bd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5wt4m_calico-system(edf56ce2-0695-4a38-a297-9fcd045b8bd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5wt4m" podUID="edf56ce2-0695-4a38-a297-9fcd045b8bd5" Aug 13 00:08:24.471349 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621-shm.mount: Deactivated successfully. Aug 13 00:08:24.526345 kubelet[2615]: I0813 00:08:24.526310 2615 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:24.527221 containerd[1541]: time="2025-08-13T00:08:24.527186382Z" level=info msg="StopPodSandbox for \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\"" Aug 13 00:08:24.527446 containerd[1541]: time="2025-08-13T00:08:24.527424969Z" level=info msg="Ensure that sandbox cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621 in task-service has been cleanup successfully" Aug 13 00:08:24.556327 containerd[1541]: time="2025-08-13T00:08:24.556217733Z" level=error msg="StopPodSandbox for \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\" failed" error="failed to destroy network for sandbox \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 00:08:24.556487 kubelet[2615]: E0813 00:08:24.556441 2615 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:24.556532 kubelet[2615]: E0813 00:08:24.556499 2615 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621"} Aug 13 00:08:24.556559 kubelet[2615]: E0813 00:08:24.556534 2615 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"edf56ce2-0695-4a38-a297-9fcd045b8bd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 00:08:24.556736 kubelet[2615]: E0813 00:08:24.556557 2615 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"edf56ce2-0695-4a38-a297-9fcd045b8bd5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-5wt4m" podUID="edf56ce2-0695-4a38-a297-9fcd045b8bd5" Aug 13 00:08:25.984828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount40192707.mount: Deactivated successfully. Aug 13 00:08:26.185339 containerd[1541]: time="2025-08-13T00:08:26.185178543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:26.185900 containerd[1541]: time="2025-08-13T00:08:26.185704596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 13 00:08:26.186741 containerd[1541]: time="2025-08-13T00:08:26.186685536Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:26.197987 containerd[1541]: time="2025-08-13T00:08:26.197929800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:26.198879 containerd[1541]: time="2025-08-13T00:08:26.198846653Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.700918908s" Aug 13 00:08:26.198879 containerd[1541]: time="2025-08-13T00:08:26.198880697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 13 00:08:26.209349 containerd[1541]: time="2025-08-13T00:08:26.209050571Z" level=info msg="CreateContainer within sandbox \"5dd43260499eff6a45a564be591e925f80c262a811e103e5358ca3232f84df47\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 00:08:26.226720 containerd[1541]: time="2025-08-13T00:08:26.226660483Z" level=info msg="CreateContainer within sandbox \"5dd43260499eff6a45a564be591e925f80c262a811e103e5358ca3232f84df47\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7d71b67c65b32ec0670d6a4037d0239abfef1a0ebdef858ae85cec99218867a4\"" Aug 13 00:08:26.227384 containerd[1541]: time="2025-08-13T00:08:26.227265385Z" level=info msg="StartContainer for \"7d71b67c65b32ec0670d6a4037d0239abfef1a0ebdef858ae85cec99218867a4\"" Aug 13 00:08:26.311319 containerd[1541]: time="2025-08-13T00:08:26.311197724Z" level=info msg="StartContainer for \"7d71b67c65b32ec0670d6a4037d0239abfef1a0ebdef858ae85cec99218867a4\" returns successfully" Aug 13 00:08:26.768207 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 00:08:26.768386 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 13 00:08:26.896510 kubelet[2615]: I0813 00:08:26.895197 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-69zbs" podStartSLOduration=1.5720554039999999 podStartE2EDuration="10.89517874s" podCreationTimestamp="2025-08-13 00:08:16 +0000 UTC" firstStartedPulling="2025-08-13 00:08:16.876489395 +0000 UTC m=+20.576698680" lastFinishedPulling="2025-08-13 00:08:26.199612731 +0000 UTC m=+29.899822016" observedRunningTime="2025-08-13 00:08:26.581368932 +0000 UTC m=+30.281578217" watchObservedRunningTime="2025-08-13 00:08:26.89517874 +0000 UTC m=+30.595388025" Aug 13 00:08:26.925241 containerd[1541]: time="2025-08-13T00:08:26.924425875Z" level=info msg="StopPodSandbox for \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\"" Aug 13 00:08:27.210936 containerd[1541]: 2025-08-13 00:08:27.078 [INFO][3922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:27.210936 containerd[1541]: 2025-08-13 00:08:27.078 [INFO][3922] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" iface="eth0" netns="/var/run/netns/cni-49df2d6b-ac24-ee75-a516-da1efed1550d" Aug 13 00:08:27.210936 containerd[1541]: 2025-08-13 00:08:27.079 [INFO][3922] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" iface="eth0" netns="/var/run/netns/cni-49df2d6b-ac24-ee75-a516-da1efed1550d" Aug 13 00:08:27.210936 containerd[1541]: 2025-08-13 00:08:27.080 [INFO][3922] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" iface="eth0" netns="/var/run/netns/cni-49df2d6b-ac24-ee75-a516-da1efed1550d" Aug 13 00:08:27.210936 containerd[1541]: 2025-08-13 00:08:27.080 [INFO][3922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:27.210936 containerd[1541]: 2025-08-13 00:08:27.081 [INFO][3922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:27.210936 containerd[1541]: 2025-08-13 00:08:27.191 [INFO][3932] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" HandleID="k8s-pod-network.d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Workload="localhost-k8s-whisker--c5fd5c744--nscz9-eth0" Aug 13 00:08:27.210936 containerd[1541]: 2025-08-13 00:08:27.192 [INFO][3932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:27.210936 containerd[1541]: 2025-08-13 00:08:27.192 [INFO][3932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:27.210936 containerd[1541]: 2025-08-13 00:08:27.204 [WARNING][3932] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" HandleID="k8s-pod-network.d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Workload="localhost-k8s-whisker--c5fd5c744--nscz9-eth0" Aug 13 00:08:27.210936 containerd[1541]: 2025-08-13 00:08:27.205 [INFO][3932] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" HandleID="k8s-pod-network.d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Workload="localhost-k8s-whisker--c5fd5c744--nscz9-eth0" Aug 13 00:08:27.210936 containerd[1541]: 2025-08-13 00:08:27.206 [INFO][3932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:27.210936 containerd[1541]: 2025-08-13 00:08:27.208 [INFO][3922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:27.214334 containerd[1541]: time="2025-08-13T00:08:27.211117825Z" level=info msg="TearDown network for sandbox \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\" successfully" Aug 13 00:08:27.214334 containerd[1541]: time="2025-08-13T00:08:27.211147868Z" level=info msg="StopPodSandbox for \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\" returns successfully" Aug 13 00:08:27.215570 systemd[1]: run-netns-cni\x2d49df2d6b\x2dac24\x2dee75\x2da516\x2dda1efed1550d.mount: Deactivated successfully. Aug 13 00:08:27.314324 kubelet[2615]: I0813 00:08:27.314261 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97db4313-54d9-4d45-8d85-4c3787c17fbe-whisker-backend-key-pair\") pod \"97db4313-54d9-4d45-8d85-4c3787c17fbe\" (UID: \"97db4313-54d9-4d45-8d85-4c3787c17fbe\") " Aug 13 00:08:27.314324 kubelet[2615]: I0813 00:08:27.314319 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97db4313-54d9-4d45-8d85-4c3787c17fbe-whisker-ca-bundle\") pod \"97db4313-54d9-4d45-8d85-4c3787c17fbe\" (UID: \"97db4313-54d9-4d45-8d85-4c3787c17fbe\") " Aug 13 00:08:27.314616 kubelet[2615]: I0813 00:08:27.314345 2615 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq49r\" (UniqueName: \"kubernetes.io/projected/97db4313-54d9-4d45-8d85-4c3787c17fbe-kube-api-access-bq49r\") pod \"97db4313-54d9-4d45-8d85-4c3787c17fbe\" (UID: \"97db4313-54d9-4d45-8d85-4c3787c17fbe\") " Aug 13 00:08:27.315006 kubelet[2615]: I0813 00:08:27.314949 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97db4313-54d9-4d45-8d85-4c3787c17fbe-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "97db4313-54d9-4d45-8d85-4c3787c17fbe" (UID: "97db4313-54d9-4d45-8d85-4c3787c17fbe"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 00:08:27.319178 kubelet[2615]: I0813 00:08:27.318716 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97db4313-54d9-4d45-8d85-4c3787c17fbe-kube-api-access-bq49r" (OuterVolumeSpecName: "kube-api-access-bq49r") pod "97db4313-54d9-4d45-8d85-4c3787c17fbe" (UID: "97db4313-54d9-4d45-8d85-4c3787c17fbe"). InnerVolumeSpecName "kube-api-access-bq49r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 00:08:27.320198 systemd[1]: var-lib-kubelet-pods-97db4313\x2d54d9\x2d4d45\x2d8d85\x2d4c3787c17fbe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbq49r.mount: Deactivated successfully. Aug 13 00:08:27.328921 kubelet[2615]: I0813 00:08:27.328845 2615 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97db4313-54d9-4d45-8d85-4c3787c17fbe-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "97db4313-54d9-4d45-8d85-4c3787c17fbe" (UID: "97db4313-54d9-4d45-8d85-4c3787c17fbe"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 00:08:27.330768 systemd[1]: var-lib-kubelet-pods-97db4313\x2d54d9\x2d4d45\x2d8d85\x2d4c3787c17fbe-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 00:08:27.415403 kubelet[2615]: I0813 00:08:27.415361 2615 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq49r\" (UniqueName: \"kubernetes.io/projected/97db4313-54d9-4d45-8d85-4c3787c17fbe-kube-api-access-bq49r\") on node \"localhost\" DevicePath \"\"" Aug 13 00:08:27.415403 kubelet[2615]: I0813 00:08:27.415397 2615 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97db4313-54d9-4d45-8d85-4c3787c17fbe-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 00:08:27.415403 kubelet[2615]: I0813 00:08:27.415409 2615 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97db4313-54d9-4d45-8d85-4c3787c17fbe-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 00:08:27.717219 kubelet[2615]: I0813 00:08:27.717161 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4139daf2-031b-4326-9708-f7c4b1fda966-whisker-backend-key-pair\") pod \"whisker-7697bbd8d8-jklxr\" (UID: \"4139daf2-031b-4326-9708-f7c4b1fda966\") " pod="calico-system/whisker-7697bbd8d8-jklxr" Aug 13 00:08:27.717219 kubelet[2615]: I0813 00:08:27.717211 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xhqj\" (UniqueName: \"kubernetes.io/projected/4139daf2-031b-4326-9708-f7c4b1fda966-kube-api-access-6xhqj\") pod \"whisker-7697bbd8d8-jklxr\" (UID: \"4139daf2-031b-4326-9708-f7c4b1fda966\") " pod="calico-system/whisker-7697bbd8d8-jklxr" Aug 13 00:08:27.717422 kubelet[2615]: I0813 00:08:27.717254 2615 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4139daf2-031b-4326-9708-f7c4b1fda966-whisker-ca-bundle\") pod \"whisker-7697bbd8d8-jklxr\" (UID: \"4139daf2-031b-4326-9708-f7c4b1fda966\") " pod="calico-system/whisker-7697bbd8d8-jklxr" Aug 13 00:08:27.918893 containerd[1541]: time="2025-08-13T00:08:27.918836605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7697bbd8d8-jklxr,Uid:4139daf2-031b-4326-9708-f7c4b1fda966,Namespace:calico-system,Attempt:0,}" Aug 13 00:08:28.171780 systemd-networkd[1223]: calie2d63015102: Link UP Aug 13 00:08:28.172757 systemd-networkd[1223]: calie2d63015102: Gained carrier Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.080 [INFO][3977] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist 
Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.093 [INFO][3977] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7697bbd8d8--jklxr-eth0 whisker-7697bbd8d8- calico-system 4139daf2-031b-4326-9708-f7c4b1fda966 886 0 2025-08-13 00:08:27 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7697bbd8d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7697bbd8d8-jklxr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie2d63015102 [] [] }} ContainerID="b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" Namespace="calico-system" Pod="whisker-7697bbd8d8-jklxr" WorkloadEndpoint="localhost-k8s-whisker--7697bbd8d8--jklxr-" Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.093 [INFO][3977] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" Namespace="calico-system" Pod="whisker-7697bbd8d8-jklxr" WorkloadEndpoint="localhost-k8s-whisker--7697bbd8d8--jklxr-eth0" Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.119 [INFO][3993] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" HandleID="k8s-pod-network.b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" Workload="localhost-k8s-whisker--7697bbd8d8--jklxr-eth0" Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.119 [INFO][3993] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" HandleID="k8s-pod-network.b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" Workload="localhost-k8s-whisker--7697bbd8d8--jklxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005a1180), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7697bbd8d8-jklxr", "timestamp":"2025-08-13 00:08:28.119801493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.120 [INFO][3993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.120 [INFO][3993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.120 [INFO][3993] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.130 [INFO][3993] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" host="localhost" Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.138 [INFO][3993] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.143 [INFO][3993] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.145 [INFO][3993] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.148 [INFO][3993] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.148 [INFO][3993] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" host="localhost" Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.151 [INFO][3993] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.156 [INFO][3993] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" host="localhost" Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.161 [INFO][3993] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" host="localhost" Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.161 [INFO][3993] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" host="localhost" Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.161 [INFO][3993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:08:28.191553 containerd[1541]: 2025-08-13 00:08:28.161 [INFO][3993] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" HandleID="k8s-pod-network.b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" Workload="localhost-k8s-whisker--7697bbd8d8--jklxr-eth0" Aug 13 00:08:28.194040 containerd[1541]: 2025-08-13 00:08:28.163 [INFO][3977] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" Namespace="calico-system" Pod="whisker-7697bbd8d8-jklxr" WorkloadEndpoint="localhost-k8s-whisker--7697bbd8d8--jklxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7697bbd8d8--jklxr-eth0", GenerateName:"whisker-7697bbd8d8-", Namespace:"calico-system", SelfLink:"", UID:"4139daf2-031b-4326-9708-f7c4b1fda966", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7697bbd8d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7697bbd8d8-jklxr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2d63015102", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:28.194040 containerd[1541]: 2025-08-13 00:08:28.164 [INFO][3977] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" Namespace="calico-system" Pod="whisker-7697bbd8d8-jklxr" WorkloadEndpoint="localhost-k8s-whisker--7697bbd8d8--jklxr-eth0" Aug 13 00:08:28.194040 containerd[1541]: 2025-08-13 00:08:28.164 [INFO][3977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2d63015102 ContainerID="b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" Namespace="calico-system" Pod="whisker-7697bbd8d8-jklxr" WorkloadEndpoint="localhost-k8s-whisker--7697bbd8d8--jklxr-eth0" Aug 13 00:08:28.194040 containerd[1541]: 2025-08-13 00:08:28.173 [INFO][3977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" Namespace="calico-system" Pod="whisker-7697bbd8d8-jklxr" WorkloadEndpoint="localhost-k8s-whisker--7697bbd8d8--jklxr-eth0" Aug 13 00:08:28.194040 containerd[1541]: 2025-08-13 00:08:28.173 [INFO][3977] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" Namespace="calico-system" Pod="whisker-7697bbd8d8-jklxr" WorkloadEndpoint="localhost-k8s-whisker--7697bbd8d8--jklxr-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7697bbd8d8--jklxr-eth0", GenerateName:"whisker-7697bbd8d8-", Namespace:"calico-system", SelfLink:"", UID:"4139daf2-031b-4326-9708-f7c4b1fda966", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7697bbd8d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d", Pod:"whisker-7697bbd8d8-jklxr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie2d63015102", MAC:"ea:60:7c:f9:df:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:28.194040 containerd[1541]: 2025-08-13 00:08:28.183 [INFO][3977] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d" Namespace="calico-system" Pod="whisker-7697bbd8d8-jklxr" WorkloadEndpoint="localhost-k8s-whisker--7697bbd8d8--jklxr-eth0" Aug 13 00:08:28.226105 containerd[1541]: time="2025-08-13T00:08:28.224667448Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:08:28.226105 containerd[1541]: time="2025-08-13T00:08:28.224754137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:08:28.226105 containerd[1541]: time="2025-08-13T00:08:28.224766778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:28.226105 containerd[1541]: time="2025-08-13T00:08:28.224898910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:28.321977 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:08:28.374401 containerd[1541]: time="2025-08-13T00:08:28.373363268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7697bbd8d8-jklxr,Uid:4139daf2-031b-4326-9708-f7c4b1fda966,Namespace:calico-system,Attempt:0,} returns sandbox id \"b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d\"" Aug 13 00:08:28.378143 containerd[1541]: time="2025-08-13T00:08:28.377867854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 00:08:28.395066 kubelet[2615]: I0813 00:08:28.394778 2615 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97db4313-54d9-4d45-8d85-4c3787c17fbe" path="/var/lib/kubelet/pods/97db4313-54d9-4d45-8d85-4c3787c17fbe/volumes" Aug 13 00:08:28.984487 systemd[1]: run-containerd-runc-k8s.io-b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d-runc.7izjBu.mount: Deactivated successfully. Aug 13 00:08:29.504524 containerd[1541]: time="2025-08-13T00:08:29.504459126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:29.505582 containerd[1541]: time="2025-08-13T00:08:29.505539024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 13 00:08:29.506688 containerd[1541]: time="2025-08-13T00:08:29.506638125Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:29.509528 containerd[1541]: time="2025-08-13T00:08:29.509256204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:29.510949 containerd[1541]: time="2025-08-13T00:08:29.510919715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.133009978s" Aug 13 00:08:29.511064 containerd[1541]: time="2025-08-13T00:08:29.511048087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 13 00:08:29.514557 containerd[1541]: time="2025-08-13T00:08:29.514514844Z" level=info msg="CreateContainer within sandbox \"b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 00:08:29.529412 containerd[1541]: time="2025-08-13T00:08:29.529286952Z" level=info msg="CreateContainer within sandbox \"b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8ef428fd33b140dd0668ab66af962629190b7aee4614b8b45f7fe44f02f460a1\"" Aug 13 00:08:29.529884 containerd[1541]: time="2025-08-13T00:08:29.529810320Z" level=info msg="StartContainer for \"8ef428fd33b140dd0668ab66af962629190b7aee4614b8b45f7fe44f02f460a1\"" Aug 13 00:08:29.611787 
containerd[1541]: time="2025-08-13T00:08:29.611669873Z" level=info msg="StartContainer for \"8ef428fd33b140dd0668ab66af962629190b7aee4614b8b45f7fe44f02f460a1\" returns successfully" Aug 13 00:08:29.616636 containerd[1541]: time="2025-08-13T00:08:29.613504921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 00:08:30.182252 systemd-networkd[1223]: calie2d63015102: Gained IPv6LL Aug 13 00:08:31.012033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2435900912.mount: Deactivated successfully. Aug 13 00:08:31.047696 containerd[1541]: time="2025-08-13T00:08:31.047646303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:31.048723 containerd[1541]: time="2025-08-13T00:08:31.048519578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 13 00:08:31.049685 containerd[1541]: time="2025-08-13T00:08:31.049549546Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:31.052113 containerd[1541]: time="2025-08-13T00:08:31.051683688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:31.052553 containerd[1541]: time="2025-08-13T00:08:31.052518839Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.438898507s" Aug 13 00:08:31.052553 containerd[1541]: time="2025-08-13T00:08:31.052549442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 13 00:08:31.054730 containerd[1541]: time="2025-08-13T00:08:31.054697465Z" level=info msg="CreateContainer within sandbox \"b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 00:08:31.068608 containerd[1541]: time="2025-08-13T00:08:31.068553688Z" level=info msg="CreateContainer within sandbox \"b74fb87ef83a542b29cd515902383cbc9185b4faf02d4d371323d2b73372592d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9b4774d9c29353c9857629119dd5049278d15c0d0b85c6e3f71d50c159423f9f\"" Aug 13 00:08:31.070067 containerd[1541]: time="2025-08-13T00:08:31.070036015Z" level=info msg="StartContainer for \"9b4774d9c29353c9857629119dd5049278d15c0d0b85c6e3f71d50c159423f9f\"" Aug 13 00:08:31.145923 containerd[1541]: time="2025-08-13T00:08:31.145856568Z" level=info msg="StartContainer for \"9b4774d9c29353c9857629119dd5049278d15c0d0b85c6e3f71d50c159423f9f\" returns successfully" Aug 13 00:08:31.604807 kubelet[2615]: I0813 00:08:31.604731 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7697bbd8d8-jklxr" podStartSLOduration=1.928207349 podStartE2EDuration="4.604714501s" podCreationTimestamp="2025-08-13 00:08:27 +0000 UTC" 
firstStartedPulling="2025-08-13 00:08:28.376931726 +0000 UTC m=+32.077140971" lastFinishedPulling="2025-08-13 00:08:31.053438838 +0000 UTC m=+34.753648123" observedRunningTime="2025-08-13 00:08:31.602029152 +0000 UTC m=+35.302238437" watchObservedRunningTime="2025-08-13 00:08:31.604714501 +0000 UTC m=+35.304923786" Aug 13 00:08:35.391501 containerd[1541]: time="2025-08-13T00:08:35.391403220Z" level=info msg="StopPodSandbox for \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\"" Aug 13 00:08:35.504883 containerd[1541]: 2025-08-13 00:08:35.453 [INFO][4427] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:35.504883 containerd[1541]: 2025-08-13 00:08:35.453 [INFO][4427] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" iface="eth0" netns="/var/run/netns/cni-8f4dcc77-bd59-2b9c-10fb-884e14a11990" Aug 13 00:08:35.504883 containerd[1541]: 2025-08-13 00:08:35.453 [INFO][4427] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" iface="eth0" netns="/var/run/netns/cni-8f4dcc77-bd59-2b9c-10fb-884e14a11990" Aug 13 00:08:35.504883 containerd[1541]: 2025-08-13 00:08:35.453 [INFO][4427] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" iface="eth0" netns="/var/run/netns/cni-8f4dcc77-bd59-2b9c-10fb-884e14a11990" Aug 13 00:08:35.504883 containerd[1541]: 2025-08-13 00:08:35.453 [INFO][4427] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:35.504883 containerd[1541]: 2025-08-13 00:08:35.454 [INFO][4427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:35.504883 containerd[1541]: 2025-08-13 00:08:35.482 [INFO][4436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" HandleID="k8s-pod-network.cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Workload="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:35.504883 containerd[1541]: 2025-08-13 00:08:35.482 [INFO][4436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:35.504883 containerd[1541]: 2025-08-13 00:08:35.482 [INFO][4436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:35.504883 containerd[1541]: 2025-08-13 00:08:35.497 [WARNING][4436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" HandleID="k8s-pod-network.cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Workload="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:35.504883 containerd[1541]: 2025-08-13 00:08:35.497 [INFO][4436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" HandleID="k8s-pod-network.cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Workload="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:35.504883 containerd[1541]: 2025-08-13 00:08:35.499 [INFO][4436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:35.504883 containerd[1541]: 2025-08-13 00:08:35.502 [INFO][4427] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:35.510204 containerd[1541]: time="2025-08-13T00:08:35.505022689Z" level=info msg="TearDown network for sandbox \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\" successfully" Aug 13 00:08:35.510204 containerd[1541]: time="2025-08-13T00:08:35.505052811Z" level=info msg="StopPodSandbox for \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\" returns successfully" Aug 13 00:08:35.510204 containerd[1541]: time="2025-08-13T00:08:35.505706141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5wt4m,Uid:edf56ce2-0695-4a38-a297-9fcd045b8bd5,Namespace:calico-system,Attempt:1,}" Aug 13 00:08:35.508119 systemd[1]: run-netns-cni\x2d8f4dcc77\x2dbd59\x2d2b9c\x2d10fb\x2d884e14a11990.mount: Deactivated successfully. Aug 13 00:08:35.667149 systemd-networkd[1223]: calib7c481e3b16: Link UP Aug 13 00:08:35.667624 systemd-networkd[1223]: calib7c481e3b16: Gained carrier Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.555 [INFO][4445] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.570 [INFO][4445] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5wt4m-eth0 csi-node-driver- calico-system edf56ce2-0695-4a38-a297-9fcd045b8bd5 923 0 2025-08-13 00:08:16 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5wt4m eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib7c481e3b16 [] [] }} ContainerID="ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" Namespace="calico-system" Pod="csi-node-driver-5wt4m" WorkloadEndpoint="localhost-k8s-csi--node--driver--5wt4m-" Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.570 [INFO][4445] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" Namespace="calico-system" Pod="csi-node-driver-5wt4m" WorkloadEndpoint="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.606 [INFO][4460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" 
HandleID="k8s-pod-network.ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" Workload="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.606 [INFO][4460] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" HandleID="k8s-pod-network.ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" Workload="localhost-k8s-csi--node--driver--5wt4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3110), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5wt4m", "timestamp":"2025-08-13 00:08:35.606644731 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.606 [INFO][4460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.608 [INFO][4460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.608 [INFO][4460] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.619 [INFO][4460] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" host="localhost" Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.625 [INFO][4460] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.634 [INFO][4460] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.636 [INFO][4460] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.638 [INFO][4460] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.638 [INFO][4460] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" host="localhost" Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.639 [INFO][4460] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373 Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.653 [INFO][4460] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" host="localhost" Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.662 [INFO][4460] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" host="localhost" Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.662 [INFO][4460] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" host="localhost" Aug 13 
00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.662 [INFO][4460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:35.688866 containerd[1541]: 2025-08-13 00:08:35.662 [INFO][4460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" HandleID="k8s-pod-network.ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" Workload="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:35.689512 containerd[1541]: 2025-08-13 00:08:35.665 [INFO][4445] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" Namespace="calico-system" Pod="csi-node-driver-5wt4m" WorkloadEndpoint="localhost-k8s-csi--node--driver--5wt4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5wt4m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"edf56ce2-0695-4a38-a297-9fcd045b8bd5", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5wt4m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7c481e3b16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:35.689512 containerd[1541]: 2025-08-13 00:08:35.665 [INFO][4445] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" Namespace="calico-system" Pod="csi-node-driver-5wt4m" WorkloadEndpoint="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:35.689512 containerd[1541]: 2025-08-13 00:08:35.665 [INFO][4445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib7c481e3b16 ContainerID="ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" Namespace="calico-system" Pod="csi-node-driver-5wt4m" WorkloadEndpoint="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:35.689512 containerd[1541]: 2025-08-13 00:08:35.668 [INFO][4445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" Namespace="calico-system" Pod="csi-node-driver-5wt4m" WorkloadEndpoint="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:35.689512 containerd[1541]: 2025-08-13 00:08:35.669 [INFO][4445] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" Namespace="calico-system" Pod="csi-node-driver-5wt4m" WorkloadEndpoint="localhost-k8s-csi--node--driver--5wt4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5wt4m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"edf56ce2-0695-4a38-a297-9fcd045b8bd5", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373", Pod:"csi-node-driver-5wt4m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7c481e3b16", MAC:"1a:d6:27:29:b0:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:35.689512 containerd[1541]: 2025-08-13 00:08:35.686 [INFO][4445] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373" Namespace="calico-system" Pod="csi-node-driver-5wt4m" WorkloadEndpoint="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:35.703859 containerd[1541]: time="2025-08-13T00:08:35.703693427Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:08:35.703859 containerd[1541]: time="2025-08-13T00:08:35.703767233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:08:35.703859 containerd[1541]: time="2025-08-13T00:08:35.703801275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:35.703859 containerd[1541]: time="2025-08-13T00:08:35.703971128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:35.742151 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:08:35.757346 containerd[1541]: time="2025-08-13T00:08:35.757291198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5wt4m,Uid:edf56ce2-0695-4a38-a297-9fcd045b8bd5,Namespace:calico-system,Attempt:1,} returns sandbox id \"ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373\"" Aug 13 00:08:35.761545 containerd[1541]: time="2025-08-13T00:08:35.760328948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 00:08:36.393800 containerd[1541]: time="2025-08-13T00:08:36.392021167Z" level=info msg="StopPodSandbox for \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\"" Aug 13 00:08:36.480500 containerd[1541]: 2025-08-13 00:08:36.440 [INFO][4553] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:36.480500 containerd[1541]: 2025-08-13 00:08:36.441 [INFO][4553] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" iface="eth0" netns="/var/run/netns/cni-8b47526e-a6e9-8d78-625e-3d3ac3615f18" Aug 13 00:08:36.480500 containerd[1541]: 2025-08-13 00:08:36.441 [INFO][4553] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" iface="eth0" netns="/var/run/netns/cni-8b47526e-a6e9-8d78-625e-3d3ac3615f18" Aug 13 00:08:36.480500 containerd[1541]: 2025-08-13 00:08:36.441 [INFO][4553] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" iface="eth0" netns="/var/run/netns/cni-8b47526e-a6e9-8d78-625e-3d3ac3615f18" Aug 13 00:08:36.480500 containerd[1541]: 2025-08-13 00:08:36.441 [INFO][4553] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:36.480500 containerd[1541]: 2025-08-13 00:08:36.441 [INFO][4553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:36.480500 containerd[1541]: 2025-08-13 00:08:36.464 [INFO][4562] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" HandleID="k8s-pod-network.8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Workload="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:36.480500 containerd[1541]: 2025-08-13 00:08:36.465 [INFO][4562] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:36.480500 containerd[1541]: 2025-08-13 00:08:36.465 [INFO][4562] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:36.480500 containerd[1541]: 2025-08-13 00:08:36.473 [WARNING][4562] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" HandleID="k8s-pod-network.8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Workload="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:36.480500 containerd[1541]: 2025-08-13 00:08:36.474 [INFO][4562] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" HandleID="k8s-pod-network.8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Workload="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:36.480500 containerd[1541]: 2025-08-13 00:08:36.476 [INFO][4562] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:36.480500 containerd[1541]: 2025-08-13 00:08:36.478 [INFO][4553] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:36.481365 containerd[1541]: time="2025-08-13T00:08:36.481315252Z" level=info msg="TearDown network for sandbox \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\" successfully" Aug 13 00:08:36.481365 containerd[1541]: time="2025-08-13T00:08:36.481350735Z" level=info msg="StopPodSandbox for \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\" returns successfully" Aug 13 00:08:36.482004 containerd[1541]: time="2025-08-13T00:08:36.481975781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568ff5db89-pgwqw,Uid:7e176c4c-6fda-4c82-bb44-ae8b69c41d34,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:08:36.508776 systemd[1]: run-netns-cni\x2d8b47526e\x2da6e9\x2d8d78\x2d625e\x2d3d3ac3615f18.mount: Deactivated successfully. Aug 13 00:08:36.661826 systemd-networkd[1223]: cali18c8bb7d3e0: Link UP Aug 13 00:08:36.662787 systemd-networkd[1223]: cali18c8bb7d3e0: Gained carrier Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.551 [INFO][4575] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.566 [INFO][4575] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0 calico-apiserver-568ff5db89- calico-apiserver 7e176c4c-6fda-4c82-bb44-ae8b69c41d34 930 0 2025-08-13 00:08:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:568ff5db89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-568ff5db89-pgwqw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali18c8bb7d3e0 [] [] }} ContainerID="0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-pgwqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-" Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.567 [INFO][4575] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-pgwqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.591 [INFO][4584] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" HandleID="k8s-pod-network.0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" Workload="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.591 [INFO][4584] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" HandleID="k8s-pod-network.0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" Workload="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000110aa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-568ff5db89-pgwqw", "timestamp":"2025-08-13 00:08:36.591699007 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.591 [INFO][4584] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.592 [INFO][4584] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.592 [INFO][4584] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.607 [INFO][4584] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" host="localhost" Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.611 [INFO][4584] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.615 [INFO][4584] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.618 [INFO][4584] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.620 [INFO][4584] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.620 [INFO][4584] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" host="localhost" Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.621 [INFO][4584] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5 Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.633 [INFO][4584] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" host="localhost" Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.649 [INFO][4584] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" host="localhost" Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.650 [INFO][4584] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.131/26] handle="k8s-pod-network.0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" host="localhost" Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.650 [INFO][4584] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:36.679343 containerd[1541]: 2025-08-13 00:08:36.650 [INFO][4584] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" HandleID="k8s-pod-network.0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" Workload="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:36.680195 containerd[1541]: 2025-08-13 00:08:36.658 [INFO][4575] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-pgwqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0", GenerateName:"calico-apiserver-568ff5db89-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e176c4c-6fda-4c82-bb44-ae8b69c41d34", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568ff5db89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-568ff5db89-pgwqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18c8bb7d3e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:36.680195 containerd[1541]: 2025-08-13 00:08:36.658 [INFO][4575] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-pgwqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:36.680195 containerd[1541]: 2025-08-13 00:08:36.658 [INFO][4575] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18c8bb7d3e0 ContainerID="0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-pgwqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:36.680195 containerd[1541]: 2025-08-13 00:08:36.662 [INFO][4575] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-pgwqw" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:36.680195 containerd[1541]: 2025-08-13 00:08:36.664 [INFO][4575] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-pgwqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0", GenerateName:"calico-apiserver-568ff5db89-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e176c4c-6fda-4c82-bb44-ae8b69c41d34", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568ff5db89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5", Pod:"calico-apiserver-568ff5db89-pgwqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18c8bb7d3e0", MAC:"b6:b5:73:f6:4f:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:36.680195 containerd[1541]: 2025-08-13 00:08:36.677 [INFO][4575] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-pgwqw" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:36.697564 containerd[1541]: time="2025-08-13T00:08:36.697440221Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:08:36.697564 containerd[1541]: time="2025-08-13T00:08:36.697505746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:08:36.697564 containerd[1541]: time="2025-08-13T00:08:36.697530307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:36.697757 containerd[1541]: time="2025-08-13T00:08:36.697634075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:36.736363 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:08:36.766380 containerd[1541]: time="2025-08-13T00:08:36.766328645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568ff5db89-pgwqw,Uid:7e176c4c-6fda-4c82-bb44-ae8b69c41d34,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5\"" Aug 13 00:08:36.991242 containerd[1541]: time="2025-08-13T00:08:36.991087009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:36.991685 containerd[1541]: time="2025-08-13T00:08:36.991647370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 13 00:08:36.993947 containerd[1541]: time="2025-08-13T00:08:36.993897255Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:36.997516 containerd[1541]: time="2025-08-13T00:08:36.997425835Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:36.998021 containerd[1541]: time="2025-08-13T00:08:36.997962074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.237590322s" Aug 13 00:08:36.998129 containerd[1541]: time="2025-08-13T00:08:36.998021358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 13 00:08:37.000310 containerd[1541]: time="2025-08-13T00:08:37.000135954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:08:37.003002 containerd[1541]: time="2025-08-13T00:08:37.002853150Z" level=info msg="CreateContainer within sandbox \"ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 00:08:37.026967 containerd[1541]: time="2025-08-13T00:08:37.026888830Z" level=info msg="CreateContainer within sandbox \"ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f13fe4fe9e33d8f925fe6f6690bdbe0a426b67ebefbf0b803222aa39dd29db11\"" Aug 13 00:08:37.032367 containerd[1541]: time="2025-08-13T00:08:37.031589807Z" level=info msg="StartContainer for \"f13fe4fe9e33d8f925fe6f6690bdbe0a426b67ebefbf0b803222aa39dd29db11\"" Aug 13 00:08:37.116870 containerd[1541]: time="2025-08-13T00:08:37.116824027Z" level=info msg="StartContainer for \"f13fe4fe9e33d8f925fe6f6690bdbe0a426b67ebefbf0b803222aa39dd29db11\" returns successfully" Aug 13 00:08:37.395106 containerd[1541]: time="2025-08-13T00:08:37.394856406Z" level=info msg="StopPodSandbox for \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\"" Aug 13 00:08:37.395106 containerd[1541]: 
time="2025-08-13T00:08:37.394915170Z" level=info msg="StopPodSandbox for \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\"" Aug 13 00:08:37.395106 containerd[1541]: time="2025-08-13T00:08:37.394864407Z" level=info msg="StopPodSandbox for \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\"" Aug 13 00:08:37.499266 containerd[1541]: 2025-08-13 00:08:37.455 [INFO][4725] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:37.499266 containerd[1541]: 2025-08-13 00:08:37.456 [INFO][4725] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" iface="eth0" netns="/var/run/netns/cni-426f3dad-0cd3-450a-b4e6-d368f85536e0" Aug 13 00:08:37.499266 containerd[1541]: 2025-08-13 00:08:37.456 [INFO][4725] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" iface="eth0" netns="/var/run/netns/cni-426f3dad-0cd3-450a-b4e6-d368f85536e0" Aug 13 00:08:37.499266 containerd[1541]: 2025-08-13 00:08:37.456 [INFO][4725] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" iface="eth0" netns="/var/run/netns/cni-426f3dad-0cd3-450a-b4e6-d368f85536e0" Aug 13 00:08:37.499266 containerd[1541]: 2025-08-13 00:08:37.456 [INFO][4725] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:37.499266 containerd[1541]: 2025-08-13 00:08:37.456 [INFO][4725] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:37.499266 containerd[1541]: 2025-08-13 00:08:37.484 [INFO][4758] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" HandleID="k8s-pod-network.1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Workload="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:37.499266 containerd[1541]: 2025-08-13 00:08:37.484 [INFO][4758] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:37.499266 containerd[1541]: 2025-08-13 00:08:37.484 [INFO][4758] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:37.499266 containerd[1541]: 2025-08-13 00:08:37.493 [WARNING][4758] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" HandleID="k8s-pod-network.1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Workload="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:37.499266 containerd[1541]: 2025-08-13 00:08:37.493 [INFO][4758] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" HandleID="k8s-pod-network.1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Workload="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:37.499266 containerd[1541]: 2025-08-13 00:08:37.494 [INFO][4758] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:37.499266 containerd[1541]: 2025-08-13 00:08:37.497 [INFO][4725] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:37.499881 containerd[1541]: time="2025-08-13T00:08:37.499410089Z" level=info msg="TearDown network for sandbox \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\" successfully" Aug 13 00:08:37.499881 containerd[1541]: time="2025-08-13T00:08:37.499436331Z" level=info msg="StopPodSandbox for \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\" returns successfully" Aug 13 00:08:37.502166 containerd[1541]: time="2025-08-13T00:08:37.502100321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568ff5db89-898g4,Uid:67aae47b-9ca7-424c-9205-116dd9244930,Namespace:calico-apiserver,Attempt:1,}" Aug 13 00:08:37.509134 systemd[1]: run-netns-cni\x2d426f3dad\x2d0cd3\x2d450a\x2db4e6\x2dd368f85536e0.mount: Deactivated successfully. Aug 13 00:08:37.515381 containerd[1541]: 2025-08-13 00:08:37.459 [INFO][4739] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:37.515381 containerd[1541]: 2025-08-13 00:08:37.459 [INFO][4739] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" iface="eth0" netns="/var/run/netns/cni-11903b50-b21e-f7e6-73cf-abc5ea91b8db" Aug 13 00:08:37.515381 containerd[1541]: 2025-08-13 00:08:37.460 [INFO][4739] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" iface="eth0" netns="/var/run/netns/cni-11903b50-b21e-f7e6-73cf-abc5ea91b8db" Aug 13 00:08:37.515381 containerd[1541]: 2025-08-13 00:08:37.460 [INFO][4739] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" iface="eth0" netns="/var/run/netns/cni-11903b50-b21e-f7e6-73cf-abc5ea91b8db" Aug 13 00:08:37.515381 containerd[1541]: 2025-08-13 00:08:37.460 [INFO][4739] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:37.515381 containerd[1541]: 2025-08-13 00:08:37.460 [INFO][4739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:37.515381 containerd[1541]: 2025-08-13 00:08:37.488 [INFO][4764] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" HandleID="k8s-pod-network.e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Workload="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:37.515381 containerd[1541]: 2025-08-13 00:08:37.488 [INFO][4764] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:37.515381 containerd[1541]: 2025-08-13 00:08:37.494 [INFO][4764] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:37.515381 containerd[1541]: 2025-08-13 00:08:37.506 [WARNING][4764] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" HandleID="k8s-pod-network.e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Workload="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:37.515381 containerd[1541]: 2025-08-13 00:08:37.506 [INFO][4764] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" HandleID="k8s-pod-network.e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Workload="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:37.515381 containerd[1541]: 2025-08-13 00:08:37.508 [INFO][4764] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:37.515381 containerd[1541]: 2025-08-13 00:08:37.512 [INFO][4739] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:37.515918 containerd[1541]: time="2025-08-13T00:08:37.515612489Z" level=info msg="TearDown network for sandbox \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\" successfully" Aug 13 00:08:37.515918 containerd[1541]: time="2025-08-13T00:08:37.515639810Z" level=info msg="StopPodSandbox for \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\" returns successfully" Aug 13 00:08:37.518469 kubelet[2615]: E0813 00:08:37.516737 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:37.520149 containerd[1541]: time="2025-08-13T00:08:37.520057927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qhll4,Uid:f323d6ae-fc54-4d5d-b0e3-3e41312708c1,Namespace:kube-system,Attempt:1,}" Aug 13 00:08:37.525837 systemd[1]: run-netns-cni\x2d11903b50\x2db21e\x2df7e6\x2d73cf\x2dabc5ea91b8db.mount: Deactivated successfully. Aug 13 00:08:37.529286 containerd[1541]: 2025-08-13 00:08:37.463 [INFO][4740] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:37.529286 containerd[1541]: 2025-08-13 00:08:37.463 [INFO][4740] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" iface="eth0" netns="/var/run/netns/cni-5e882701-dff4-2f7d-0ede-3f44f08d1cc3" Aug 13 00:08:37.529286 containerd[1541]: 2025-08-13 00:08:37.463 [INFO][4740] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" iface="eth0" netns="/var/run/netns/cni-5e882701-dff4-2f7d-0ede-3f44f08d1cc3" Aug 13 00:08:37.529286 containerd[1541]: 2025-08-13 00:08:37.463 [INFO][4740] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" iface="eth0" netns="/var/run/netns/cni-5e882701-dff4-2f7d-0ede-3f44f08d1cc3" Aug 13 00:08:37.529286 containerd[1541]: 2025-08-13 00:08:37.463 [INFO][4740] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:37.529286 containerd[1541]: 2025-08-13 00:08:37.463 [INFO][4740] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:37.529286 containerd[1541]: 2025-08-13 00:08:37.490 [INFO][4766] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" HandleID="k8s-pod-network.78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Workload="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:37.529286 containerd[1541]: 2025-08-13 00:08:37.490 [INFO][4766] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:37.529286 containerd[1541]: 2025-08-13 00:08:37.508 [INFO][4766] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:37.529286 containerd[1541]: 2025-08-13 00:08:37.519 [WARNING][4766] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" HandleID="k8s-pod-network.78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Workload="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:37.529286 containerd[1541]: 2025-08-13 00:08:37.519 [INFO][4766] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" HandleID="k8s-pod-network.78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Workload="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:37.529286 containerd[1541]: 2025-08-13 00:08:37.522 [INFO][4766] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:37.529286 containerd[1541]: 2025-08-13 00:08:37.526 [INFO][4740] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:37.529643 containerd[1541]: time="2025-08-13T00:08:37.529524524Z" level=info msg="TearDown network for sandbox \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\" successfully" Aug 13 00:08:37.529643 containerd[1541]: time="2025-08-13T00:08:37.529551606Z" level=info msg="StopPodSandbox for \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\" returns successfully" Aug 13 00:08:37.530053 kubelet[2615]: E0813 00:08:37.530031 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:37.530620 containerd[1541]: time="2025-08-13T00:08:37.530410388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8n2ps,Uid:d417f09c-4d26-45a8-bedf-3d32ed52c91e,Namespace:kube-system,Attempt:1,}" Aug 13 00:08:37.532590 systemd[1]: run-netns-cni\x2d5e882701\x2ddff4\x2d2f7d\x2d0ede\x2d3f44f08d1cc3.mount: Deactivated successfully. 
Aug 13 00:08:37.542709 systemd-networkd[1223]: calib7c481e3b16: Gained IPv6LL Aug 13 00:08:37.676347 systemd-networkd[1223]: cali9baae1565c0: Link UP Aug 13 00:08:37.676736 systemd-networkd[1223]: cali9baae1565c0: Gained carrier Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.552 [INFO][4785] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.569 [INFO][4785] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0 calico-apiserver-568ff5db89- calico-apiserver 67aae47b-9ca7-424c-9205-116dd9244930 945 0 2025-08-13 00:08:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:568ff5db89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-568ff5db89-898g4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9baae1565c0 [] [] }} ContainerID="c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-898g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--898g4-" Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.569 [INFO][4785] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-898g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.624 [INFO][4828] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" HandleID="k8s-pod-network.c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" Workload="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.624 [INFO][4828] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" HandleID="k8s-pod-network.c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" Workload="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001374d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-568ff5db89-898g4", "timestamp":"2025-08-13 00:08:37.624367872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.624 [INFO][4828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.624 [INFO][4828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
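The "calib7c481e3b16: Gained IPv6LL" and "cali9baae1565c0: Link UP ... Gained carrier" lines just above are systemd-networkd reacting to the host-side veths the CNI plugin creates. The same transitions can be watched from Go over rtnetlink; below is an illustrative sketch using github.com/vishvananda/netlink (an assumption of mine, not something this log shows Calico doing; it needs rtnetlink access and only relies on the cali* naming visible here):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/vishvananda/netlink"
)

// Subscribe to rtnetlink link updates and report operational-state changes
// for the cali* host-side veth interfaces seen in the log above.
func main() {
	updates := make(chan netlink.LinkUpdate)
	done := make(chan struct{})
	defer close(done)

	if err := netlink.LinkSubscribe(updates, done); err != nil {
		panic(err)
	}
	for u := range updates {
		attrs := u.Link.Attrs()
		if !strings.HasPrefix(attrs.Name, "cali") {
			continue
		}
		// OperState going to "up" corresponds to systemd-networkd's
		// "Gained carrier" message for the same interface.
		fmt.Printf("%s: state=%s\n", attrs.Name, attrs.OperState)
	}
}
```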
Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.624 [INFO][4828] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.634 [INFO][4828] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" host="localhost" Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.644 [INFO][4828] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.652 [INFO][4828] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.655 [INFO][4828] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.657 [INFO][4828] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.657 [INFO][4828] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" host="localhost" Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.661 [INFO][4828] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9 Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.665 [INFO][4828] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" host="localhost" Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.671 [INFO][4828] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" host="localhost" Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.671 [INFO][4828] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" host="localhost" Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.671 [INFO][4828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
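Under the host-wide lock, the ADD path above confirms this host's affinity for block 192.168.88.128/26 and then claims the first free address, which is why the successive ADDs in this log receive 192.168.88.130, .131, and .132 in order (.129 went to the whisker pod earlier). The toy model below shows that sequential claim from a /26 as a first-free-slot scan; it is a deliberate simplification of the behavior visible here, not Calico's ipam.go:

```go
package main

import (
	"fmt"
	"net"
)

// block models a host-affine /26 such as 192.168.88.128/26: 64 addresses,
// handed out in order, exactly as the successive ADDs above get .130, .131, .132.
type block struct {
	cidr net.IPNet
	used [64]bool
}

// claim returns the first free address in the block, or false if it is full.
func (b *block) claim() (net.IP, bool) {
	base := b.cidr.IP.To4()
	for i, taken := range b.used {
		if taken {
			continue
		}
		b.used[i] = true
		return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), true
	}
	return nil, false
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	b := &block{cidr: *cidr}
	b.used[0] = true // .128 treated as unavailable in this toy model
	b.used[1] = true // .129 already held by the whisker pod above
	for i := 0; i < 3; i++ {
		ip, _ := b.claim()
		fmt.Printf("Successfully claimed %s/26\n", ip) // .130, .131, .132
	}
}
```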
Aug 13 00:08:37.689503 containerd[1541]: 2025-08-13 00:08:37.671 [INFO][4828] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" HandleID="k8s-pod-network.c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" Workload="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:37.690202 containerd[1541]: 2025-08-13 00:08:37.673 [INFO][4785] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-898g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0", GenerateName:"calico-apiserver-568ff5db89-", Namespace:"calico-apiserver", SelfLink:"", UID:"67aae47b-9ca7-424c-9205-116dd9244930", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568ff5db89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-568ff5db89-898g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9baae1565c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:37.690202 containerd[1541]: 2025-08-13 00:08:37.673 [INFO][4785] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-898g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:37.690202 containerd[1541]: 2025-08-13 00:08:37.673 [INFO][4785] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9baae1565c0 ContainerID="c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-898g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:37.690202 containerd[1541]: 2025-08-13 00:08:37.677 [INFO][4785] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-898g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:37.690202 containerd[1541]: 2025-08-13 00:08:37.677 [INFO][4785] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-898g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0", GenerateName:"calico-apiserver-568ff5db89-", Namespace:"calico-apiserver", SelfLink:"", UID:"67aae47b-9ca7-424c-9205-116dd9244930", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568ff5db89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9", Pod:"calico-apiserver-568ff5db89-898g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9baae1565c0", MAC:"f6:c1:a4:6d:f2:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:37.690202 containerd[1541]: 2025-08-13 00:08:37.687 [INFO][4785] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9" Namespace="calico-apiserver" Pod="calico-apiserver-568ff5db89-898g4" WorkloadEndpoint="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:37.706688 containerd[1541]: time="2025-08-13T00:08:37.706414064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:08:37.706688 containerd[1541]: time="2025-08-13T00:08:37.706467348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:08:37.706688 containerd[1541]: time="2025-08-13T00:08:37.706478589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:37.706688 containerd[1541]: time="2025-08-13T00:08:37.706563355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:37.738134 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:08:37.780185 systemd-networkd[1223]: cali244be3479a7: Link UP Aug 13 00:08:37.780389 systemd-networkd[1223]: cali244be3479a7: Gained carrier Aug 13 00:08:37.789433 containerd[1541]: time="2025-08-13T00:08:37.789383843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-568ff5db89-898g4,Uid:67aae47b-9ca7-424c-9205-116dd9244930,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9\"" Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.577 [INFO][4797] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.604 [INFO][4797] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0 coredns-7c65d6cfc9- kube-system f323d6ae-fc54-4d5d-b0e3-3e41312708c1 946 0 2025-08-13 00:08:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-qhll4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali244be3479a7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhll4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qhll4-" Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.605 [INFO][4797] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhll4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.644 [INFO][4837] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" HandleID="k8s-pod-network.e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" Workload="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.644 [INFO][4837] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" HandleID="k8s-pod-network.e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" Workload="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000364fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-qhll4", "timestamp":"2025-08-13 00:08:37.644616081 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.645 [INFO][4837] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.671 [INFO][4837] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
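Note the interleaving: request [4837] logged "About to acquire host-wide IPAM lock" at 37.645 but only acquired it at 37.671, the instant [4828] released it. Concurrent pod creations on one host therefore queue for addresses rather than race. The pattern reduces to a single mutex around assignment, roughly:

```go
package main

import (
	"fmt"
	"sync"
)

// allocator serializes all assignments on one host behind one lock — the
// pattern behind the "About to acquire / Acquired / Released host-wide
// IPAM lock" triplets in the log. Addresses here are illustrative.
type allocator struct {
	mu   sync.Mutex
	next int
}

func (a *allocator) autoAssign(pod string) string {
	a.mu.Lock() // later requests block here until the holder releases
	defer a.mu.Unlock()
	ip := fmt.Sprintf("192.168.88.%d", a.next)
	a.next++
	return ip + " -> " + pod
}

func main() {
	a := &allocator{next: 132}
	var wg sync.WaitGroup
	for _, pod := range []string{"calico-apiserver-568ff5db89-898g4", "coredns-7c65d6cfc9-qhll4"} {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(a.autoAssign(p))
		}(pod)
	}
	wg.Wait()
}
```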
Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.671 [INFO][4837] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.735 [INFO][4837] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" host="localhost" Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.741 [INFO][4837] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.750 [INFO][4837] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.755 [INFO][4837] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.758 [INFO][4837] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.758 [INFO][4837] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" host="localhost" Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.759 [INFO][4837] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.763 [INFO][4837] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" host="localhost" Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.770 [INFO][4837] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" host="localhost" Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.770 [INFO][4837] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" host="localhost" Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.770 [INFO][4837] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
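The Workload identifiers such as localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0 follow a fixed scheme visible throughout the log: node, orchestrator, pod name, and interface joined with '-', with literal dashes inside a component doubled so the name stays unambiguous. A reconstruction of that rule, inferred from the names above:

```go
package main

import (
	"fmt"
	"strings"
)

// wepName reproduces the WorkloadEndpoint naming seen in the log
// ("localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0"): components are
// joined with '-', so dashes inside a component are doubled.
func wepName(node, orchestrator, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return strings.Join([]string{esc(node), orchestrator, esc(pod), esc(iface)}, "-")
}

func main() {
	fmt.Println(wepName("localhost", "k8s", "coredns-7c65d6cfc9-qhll4", "eth0"))
	// localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0
}
```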
Aug 13 00:08:37.796415 containerd[1541]: 2025-08-13 00:08:37.770 [INFO][4837] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" HandleID="k8s-pod-network.e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" Workload="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:37.797038 containerd[1541]: 2025-08-13 00:08:37.776 [INFO][4797] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhll4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f323d6ae-fc54-4d5d-b0e3-3e41312708c1", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-qhll4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali244be3479a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:37.797038 containerd[1541]: 2025-08-13 00:08:37.776 [INFO][4797] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhll4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:37.797038 containerd[1541]: 2025-08-13 00:08:37.776 [INFO][4797] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali244be3479a7 ContainerID="e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhll4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:37.797038 containerd[1541]: 2025-08-13 00:08:37.782 [INFO][4797] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhll4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:37.797038 
containerd[1541]: 2025-08-13 00:08:37.782 [INFO][4797] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhll4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f323d6ae-fc54-4d5d-b0e3-3e41312708c1", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c", Pod:"coredns-7c65d6cfc9-qhll4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali244be3479a7", MAC:"ae:16:e1:12:15:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:37.797038 containerd[1541]: 2025-08-13 00:08:37.793 [INFO][4797] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c" Namespace="kube-system" Pod="coredns-7c65d6cfc9-qhll4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:37.814750 containerd[1541]: time="2025-08-13T00:08:37.814647371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:08:37.814750 containerd[1541]: time="2025-08-13T00:08:37.814751898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:08:37.814918 containerd[1541]: time="2025-08-13T00:08:37.814779300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:37.814918 containerd[1541]: time="2025-08-13T00:08:37.814883308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:37.840368 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:08:37.862612 containerd[1541]: time="2025-08-13T00:08:37.862567640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-qhll4,Uid:f323d6ae-fc54-4d5d-b0e3-3e41312708c1,Namespace:kube-system,Attempt:1,} returns sandbox id \"e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c\"" Aug 13 00:08:37.863343 kubelet[2615]: E0813 00:08:37.863324 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:37.866990 containerd[1541]: time="2025-08-13T00:08:37.866932633Z" level=info msg="CreateContainer within sandbox \"e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:08:37.877251 systemd-networkd[1223]: calic3d7f874188: Link UP Aug 13 00:08:37.878150 systemd-networkd[1223]: calic3d7f874188: Gained carrier Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.585 [INFO][4810] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.604 [INFO][4810] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0 coredns-7c65d6cfc9- kube-system d417f09c-4d26-45a8-bedf-3d32ed52c91e 947 0 2025-08-13 00:08:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-8n2ps eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic3d7f874188 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8n2ps" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8n2ps-" Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.604 [INFO][4810] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8n2ps" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.658 [INFO][4844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" HandleID="k8s-pod-network.142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" Workload="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.659 [INFO][4844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" HandleID="k8s-pod-network.142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" Workload="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400019e940), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-8n2ps", "timestamp":"2025-08-13 00:08:37.65884606 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.659 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.770 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.772 [INFO][4844] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.836 [INFO][4844] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" host="localhost" Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.842 [INFO][4844] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.849 [INFO][4844] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.851 [INFO][4844] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.854 [INFO][4844] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.855 [INFO][4844] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" host="localhost" Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.856 [INFO][4844] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225 Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.860 [INFO][4844] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" host="localhost" Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.866 [INFO][4844] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" host="localhost" Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.866 [INFO][4844] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" host="localhost" Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.867 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
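In the Go struct dumps of WorkloadEndpointPort above and below, ports print in hex: Port:0x35 is 53 (the dns and dns-tcp entries) and Port:0x23c1 is 9153 (CoreDNS's Prometheus metrics port), matching the [{dns UDP 53} {dns-tcp TCP 53} {metrics TCP 9153}] list printed when each endpoint was found. Confirming the conversion:

```go
package main

import "fmt"

func main() {
	// The %#x / %d pair makes the struct-dump values readable:
	fmt.Printf("%#x = %d\n", 0x35, 0x35)     // dns, dns-tcp
	fmt.Printf("%#x = %d\n", 0x23c1, 0x23c1) // coredns metrics
}
```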
Aug 13 00:08:37.891638 containerd[1541]: 2025-08-13 00:08:37.867 [INFO][4844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" HandleID="k8s-pod-network.142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" Workload="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:37.892464 containerd[1541]: 2025-08-13 00:08:37.873 [INFO][4810] cni-plugin/k8s.go 418: Populated endpoint ContainerID="142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8n2ps" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d417f09c-4d26-45a8-bedf-3d32ed52c91e", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-8n2ps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3d7f874188", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:37.892464 containerd[1541]: 2025-08-13 00:08:37.873 [INFO][4810] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8n2ps" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:37.892464 containerd[1541]: 2025-08-13 00:08:37.874 [INFO][4810] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3d7f874188 ContainerID="142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8n2ps" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:37.892464 containerd[1541]: 2025-08-13 00:08:37.877 [INFO][4810] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8n2ps" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:37.892464 
containerd[1541]: 2025-08-13 00:08:37.877 [INFO][4810] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8n2ps" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d417f09c-4d26-45a8-bedf-3d32ed52c91e", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225", Pod:"coredns-7c65d6cfc9-8n2ps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3d7f874188", MAC:"9a:bd:ff:b4:8d:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:37.892464 containerd[1541]: 2025-08-13 00:08:37.887 [INFO][4810] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8n2ps" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:37.916038 containerd[1541]: time="2025-08-13T00:08:37.915822332Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:08:37.916038 containerd[1541]: time="2025-08-13T00:08:37.915885816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:08:37.916038 containerd[1541]: time="2025-08-13T00:08:37.915897337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:37.916350 containerd[1541]: time="2025-08-13T00:08:37.915992784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:37.929756 containerd[1541]: time="2025-08-13T00:08:37.929637801Z" level=info msg="CreateContainer within sandbox \"e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1cd252bd78cc425ec61d3cc46e5668c9695896c944892fbc70e766cea1250f0f\"" Aug 13 00:08:37.932492 containerd[1541]: time="2025-08-13T00:08:37.932335834Z" level=info msg="StartContainer for \"1cd252bd78cc425ec61d3cc46e5668c9695896c944892fbc70e766cea1250f0f\"" Aug 13 00:08:37.956640 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:08:38.004939 containerd[1541]: time="2025-08-13T00:08:38.004901220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8n2ps,Uid:d417f09c-4d26-45a8-bedf-3d32ed52c91e,Namespace:kube-system,Attempt:1,} returns sandbox id \"142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225\"" Aug 13 00:08:38.005920 kubelet[2615]: E0813 00:08:38.005893 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:38.008771 containerd[1541]: time="2025-08-13T00:08:38.008585557Z" level=info msg="CreateContainer within sandbox \"142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 00:08:38.086636 containerd[1541]: time="2025-08-13T00:08:38.086488111Z" level=info msg="CreateContainer within sandbox \"142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a1ab1172d9d421964d679d10feb8cc665d97aaf7cef7cdd839d006d4e19fcdc\"" Aug 13 00:08:38.086636 containerd[1541]: time="2025-08-13T00:08:38.086492311Z" level=info msg="StartContainer for \"1cd252bd78cc425ec61d3cc46e5668c9695896c944892fbc70e766cea1250f0f\" returns successfully" Aug 13 00:08:38.087628 containerd[1541]: time="2025-08-13T00:08:38.087603588Z" level=info msg="StartContainer for \"5a1ab1172d9d421964d679d10feb8cc665d97aaf7cef7cdd839d006d4e19fcdc\"" Aug 13 00:08:38.204667 containerd[1541]: time="2025-08-13T00:08:38.204221242Z" level=info msg="StartContainer for \"5a1ab1172d9d421964d679d10feb8cc665d97aaf7cef7cdd839d006d4e19fcdc\" returns successfully" Aug 13 00:08:38.393692 containerd[1541]: time="2025-08-13T00:08:38.392570059Z" level=info msg="StopPodSandbox for \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\"" Aug 13 00:08:38.393692 containerd[1541]: time="2025-08-13T00:08:38.392571259Z" level=info msg="StopPodSandbox for \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\"" Aug 13 00:08:38.600421 containerd[1541]: 2025-08-13 00:08:38.467 [INFO][5124] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:38.600421 containerd[1541]: 2025-08-13 00:08:38.468 [INFO][5124] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" iface="eth0" netns="/var/run/netns/cni-1b6ae79a-e40f-1b69-4e0c-ed765d20c8c8" Aug 13 00:08:38.600421 containerd[1541]: 2025-08-13 00:08:38.468 [INFO][5124] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" iface="eth0" netns="/var/run/netns/cni-1b6ae79a-e40f-1b69-4e0c-ed765d20c8c8" Aug 13 00:08:38.600421 containerd[1541]: 2025-08-13 00:08:38.469 [INFO][5124] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" iface="eth0" netns="/var/run/netns/cni-1b6ae79a-e40f-1b69-4e0c-ed765d20c8c8" Aug 13 00:08:38.600421 containerd[1541]: 2025-08-13 00:08:38.469 [INFO][5124] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:38.600421 containerd[1541]: 2025-08-13 00:08:38.469 [INFO][5124] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:38.600421 containerd[1541]: 2025-08-13 00:08:38.500 [INFO][5143] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" HandleID="k8s-pod-network.a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Workload="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:38.600421 containerd[1541]: 2025-08-13 00:08:38.500 [INFO][5143] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:38.600421 containerd[1541]: 2025-08-13 00:08:38.500 [INFO][5143] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:38.600421 containerd[1541]: 2025-08-13 00:08:38.551 [WARNING][5143] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" HandleID="k8s-pod-network.a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Workload="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:38.600421 containerd[1541]: 2025-08-13 00:08:38.551 [INFO][5143] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" HandleID="k8s-pod-network.a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Workload="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:38.600421 containerd[1541]: 2025-08-13 00:08:38.565 [INFO][5143] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:38.600421 containerd[1541]: 2025-08-13 00:08:38.581 [INFO][5124] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:38.608137 containerd[1541]: time="2025-08-13T00:08:38.606594186Z" level=info msg="TearDown network for sandbox \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\" successfully" Aug 13 00:08:38.608137 containerd[1541]: time="2025-08-13T00:08:38.606717395Z" level=info msg="StopPodSandbox for \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\" returns successfully" Aug 13 00:08:38.608377 systemd[1]: run-netns-cni\x2d1b6ae79a\x2de40f\x2d1b69\x2d4e0c\x2ded765d20c8c8.mount: Deactivated successfully. 
Aug 13 00:08:38.611004 containerd[1541]: time="2025-08-13T00:08:38.610968091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6656487d5c-69t8m,Uid:8acdfa87-3615-4f51-932b-63fd53529270,Namespace:calico-system,Attempt:1,}" Aug 13 00:08:38.632222 kubelet[2615]: E0813 00:08:38.632038 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:38.641917 containerd[1541]: 2025-08-13 00:08:38.506 [INFO][5133] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:38.641917 containerd[1541]: 2025-08-13 00:08:38.506 [INFO][5133] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" iface="eth0" netns="/var/run/netns/cni-80226005-5921-a0d6-3487-ad17da6040f1" Aug 13 00:08:38.641917 containerd[1541]: 2025-08-13 00:08:38.513 [INFO][5133] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" iface="eth0" netns="/var/run/netns/cni-80226005-5921-a0d6-3487-ad17da6040f1" Aug 13 00:08:38.641917 containerd[1541]: 2025-08-13 00:08:38.517 [INFO][5133] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" iface="eth0" netns="/var/run/netns/cni-80226005-5921-a0d6-3487-ad17da6040f1" Aug 13 00:08:38.641917 containerd[1541]: 2025-08-13 00:08:38.517 [INFO][5133] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:38.641917 containerd[1541]: 2025-08-13 00:08:38.517 [INFO][5133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:38.641917 containerd[1541]: 2025-08-13 00:08:38.607 [INFO][5153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" HandleID="k8s-pod-network.7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Workload="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:38.641917 containerd[1541]: 2025-08-13 00:08:38.607 [INFO][5153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:38.641917 containerd[1541]: 2025-08-13 00:08:38.607 [INFO][5153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:38.641917 containerd[1541]: 2025-08-13 00:08:38.619 [WARNING][5153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" HandleID="k8s-pod-network.7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Workload="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:38.641917 containerd[1541]: 2025-08-13 00:08:38.619 [INFO][5153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" HandleID="k8s-pod-network.7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Workload="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:38.641917 containerd[1541]: 2025-08-13 00:08:38.621 [INFO][5153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:38.641917 containerd[1541]: 2025-08-13 00:08:38.631 [INFO][5133] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:38.642373 kubelet[2615]: E0813 00:08:38.642335 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:38.642648 containerd[1541]: time="2025-08-13T00:08:38.642550854Z" level=info msg="TearDown network for sandbox \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\" successfully" Aug 13 00:08:38.642648 containerd[1541]: time="2025-08-13T00:08:38.642580176Z" level=info msg="StopPodSandbox for \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\" returns successfully" Aug 13 00:08:38.643923 containerd[1541]: time="2025-08-13T00:08:38.643832703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ntl8c,Uid:a308d1f0-5106-4066-87c2-bd682359b04c,Namespace:calico-system,Attempt:1,}" Aug 13 00:08:38.645950 systemd[1]: run-netns-cni\x2d80226005\x2d5921\x2da0d6\x2d3487\x2dad17da6040f1.mount: Deactivated successfully. Aug 13 00:08:38.655913 kubelet[2615]: I0813 00:08:38.655758 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-qhll4" podStartSLOduration=36.655738054 podStartE2EDuration="36.655738054s" podCreationTimestamp="2025-08-13 00:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:08:38.65439508 +0000 UTC m=+42.354604365" watchObservedRunningTime="2025-08-13 00:08:38.655738054 +0000 UTC m=+42.355947339" Aug 13 00:08:38.695316 systemd-networkd[1223]: cali18c8bb7d3e0: Gained IPv6LL Aug 13 00:08:38.700723 kubelet[2615]: I0813 00:08:38.698666 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8n2ps" podStartSLOduration=36.698648007 podStartE2EDuration="36.698648007s" podCreationTimestamp="2025-08-13 00:08:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 00:08:38.674759661 +0000 UTC m=+42.374968946" watchObservedRunningTime="2025-08-13 00:08:38.698648007 +0000 UTC m=+42.398857252" Aug 13 00:08:38.736096 systemd[1]: Started sshd@7-10.0.0.72:22-10.0.0.1:40852.service - OpenSSH per-connection server daemon (10.0.0.1:40852). 
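The recurring kubelet dns.go:153 warning means the node's resolv.conf listed more nameservers than a pod's resolv.conf may carry (three, the classic glibc limit), so kubelet applied only the first three: 1.1.1.1 1.0.0.1 8.8.8.8. The adjacent pod_startup_latency_tracker lines are self-consistent: the coredns pods were created at 00:08:02 and observed running roughly 36.7 s later. A sketch of the truncation (the fourth server is a placeholder; the log does not say which entry was dropped):

```go
package main

import "fmt"

// applyNameserverLimit mirrors the kubelet warning in the log: when more
// nameservers are configured than the platform limit (three on
// Linux/glibc), the extras are dropped and a warning is emitted.
func applyNameserverLimit(servers []string, limit int) []string {
	if len(servers) <= limit {
		return servers
	}
	fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %v\n", servers[:limit])
	return servers[:limit]
}

func main() {
	// "9.9.9.9" is purely a placeholder for the omitted entry.
	applied := applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}, 3)
	fmt.Println(applied)
}
```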
Aug 13 00:08:38.759289 systemd-networkd[1223]: cali9baae1565c0: Gained IPv6LL Aug 13 00:08:38.824449 sshd[5188]: Accepted publickey for core from 10.0.0.1 port 40852 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:08:38.826291 sshd[5188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:38.833468 systemd-logind[1517]: New session 8 of user core. Aug 13 00:08:38.843482 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 00:08:38.885543 systemd-networkd[1223]: cali0ba470c20b0: Link UP Aug 13 00:08:38.887350 systemd-networkd[1223]: cali0ba470c20b0: Gained carrier Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.746 [INFO][5175] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.787 [INFO][5175] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0 goldmane-58fd7646b9- calico-system a308d1f0-5106-4066-87c2-bd682359b04c 982 0 2025-08-13 00:08:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-ntl8c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0ba470c20b0 [] [] }} ContainerID="53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" Namespace="calico-system" Pod="goldmane-58fd7646b9-ntl8c" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ntl8c-" Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.787 [INFO][5175] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" Namespace="calico-system" Pod="goldmane-58fd7646b9-ntl8c" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.830 [INFO][5198] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" HandleID="k8s-pod-network.53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" Workload="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.830 [INFO][5198] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" HandleID="k8s-pod-network.53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" Workload="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-ntl8c", "timestamp":"2025-08-13 00:08:38.830716458 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.830 [INFO][5198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.830 [INFO][5198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
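"Gained IPv6LL" from systemd-networkd means the cali* veth finished duplicate address detection and now holds an IPv6 link-local (fe80::/10) address, the event that flips the link to fully configured. Enumerating such addresses with the standard library:

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

// Lists interfaces that have "gained IPv6LL" in systemd-networkd's terms:
// an IPv6 link-local address, as logged for the cali* veths above.
func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		addrs, err := ifc.Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			pfx, err := netip.ParsePrefix(a.String())
			if err != nil {
				continue // skip address formats without a prefix
			}
			if pfx.Addr().Is6() && pfx.Addr().IsLinkLocalUnicast() {
				fmt.Printf("%s: %s (IPv6LL)\n", ifc.Name, pfx.Addr())
			}
		}
	}
}
```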
Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.831 [INFO][5198] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.844 [INFO][5198] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" host="localhost" Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.851 [INFO][5198] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.857 [INFO][5198] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.859 [INFO][5198] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.862 [INFO][5198] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.862 [INFO][5198] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" host="localhost" Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.865 [INFO][5198] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26 Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.871 [INFO][5198] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" host="localhost" Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.878 [INFO][5198] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" host="localhost" Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.878 [INFO][5198] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" host="localhost" Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.878 [INFO][5198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 00:08:38.907299 containerd[1541]: 2025-08-13 00:08:38.878 [INFO][5198] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" HandleID="k8s-pod-network.53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" Workload="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:38.908533 containerd[1541]: 2025-08-13 00:08:38.881 [INFO][5175] cni-plugin/k8s.go 418: Populated endpoint ContainerID="53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" Namespace="calico-system" Pod="goldmane-58fd7646b9-ntl8c" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a308d1f0-5106-4066-87c2-bd682359b04c", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-ntl8c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0ba470c20b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:38.908533 containerd[1541]: 2025-08-13 00:08:38.882 [INFO][5175] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" Namespace="calico-system" Pod="goldmane-58fd7646b9-ntl8c" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:38.908533 containerd[1541]: 2025-08-13 00:08:38.882 [INFO][5175] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ba470c20b0 ContainerID="53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" Namespace="calico-system" Pod="goldmane-58fd7646b9-ntl8c" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:38.908533 containerd[1541]: 2025-08-13 00:08:38.887 [INFO][5175] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" Namespace="calico-system" Pod="goldmane-58fd7646b9-ntl8c" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:38.908533 containerd[1541]: 2025-08-13 00:08:38.888 [INFO][5175] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" Namespace="calico-system" Pod="goldmane-58fd7646b9-ntl8c" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a308d1f0-5106-4066-87c2-bd682359b04c", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26", Pod:"goldmane-58fd7646b9-ntl8c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0ba470c20b0", MAC:"56:fb:0f:b6:59:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:38.908533 containerd[1541]: 2025-08-13 00:08:38.904 [INFO][5175] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26" Namespace="calico-system" Pod="goldmane-58fd7646b9-ntl8c" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:38.937449 containerd[1541]: time="2025-08-13T00:08:38.937152361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:08:38.937449 containerd[1541]: time="2025-08-13T00:08:38.937238528Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:08:38.937449 containerd[1541]: time="2025-08-13T00:08:38.937257329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:38.937449 containerd[1541]: time="2025-08-13T00:08:38.937368457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:38.983783 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:08:39.012397 systemd-networkd[1223]: calie502072579c: Link UP Aug 13 00:08:39.013786 systemd-networkd[1223]: calie502072579c: Gained carrier Aug 13 00:08:39.031266 containerd[1541]: time="2025-08-13T00:08:39.031032978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-ntl8c,Uid:a308d1f0-5106-4066-87c2-bd682359b04c,Namespace:calico-system,Attempt:1,} returns sandbox id \"53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26\"" Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.760 [INFO][5164] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.787 [INFO][5164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0 calico-kube-controllers-6656487d5c- calico-system 8acdfa87-3615-4f51-932b-63fd53529270 980 0 2025-08-13 00:08:16 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6656487d5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6656487d5c-69t8m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie502072579c [] [] }} ContainerID="0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" Namespace="calico-system" Pod="calico-kube-controllers-6656487d5c-69t8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-" Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.788 [INFO][5164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" Namespace="calico-system" Pod="calico-kube-controllers-6656487d5c-69t8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.837 [INFO][5200] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" HandleID="k8s-pod-network.0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" Workload="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.837 [INFO][5200] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" HandleID="k8s-pod-network.0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" Workload="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3d60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6656487d5c-69t8m", "timestamp":"2025-08-13 00:08:38.837164708 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.837 
[INFO][5200] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.878 [INFO][5200] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.878 [INFO][5200] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.945 [INFO][5200] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" host="localhost" Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.952 [INFO][5200] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.960 [INFO][5200] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.963 [INFO][5200] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.969 [INFO][5200] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.969 [INFO][5200] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" host="localhost" Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.972 [INFO][5200] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.980 [INFO][5200] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" host="localhost" Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.999 [INFO][5200] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" host="localhost" Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.999 [INFO][5200] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" host="localhost" Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.999 [INFO][5200] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
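The sequence just logged — acquire the host-wide IPAM lock, confirm this host's affinity to block 192.168.88.128/26, load the block, claim the next free address, write the block back, release the lock — is Calico's block-based IPAM. A minimal sketch of only the address-selection step (not Calico's actual code; the lock, affinity handling, and datastore writes are elided):

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree scans an affinity block and returns the first address not yet
// claimed. Real Calico does this while holding the host-wide IPAM lock
// seen in the log, then persists the updated block to the datastore.
func nextFree(block netip.Prefix, claimed map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !claimed[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	claimed := make(map[netip.Addr]bool)
	// Per the log, .128 through .135 are already assigned on this host.
	for a := block.Addr(); a.Compare(netip.MustParseAddr("192.168.88.136")) < 0; a = a.Next() {
		claimed[a] = true
	}
	if a, ok := nextFree(block, claimed); ok {
		fmt.Println(a) // 192.168.88.136, matching the claim logged below
	}
}
```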
Aug 13 00:08:39.045792 containerd[1541]: 2025-08-13 00:08:38.999 [INFO][5200] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" HandleID="k8s-pod-network.0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" Workload="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:39.046435 containerd[1541]: 2025-08-13 00:08:39.004 [INFO][5164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" Namespace="calico-system" Pod="calico-kube-controllers-6656487d5c-69t8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0", GenerateName:"calico-kube-controllers-6656487d5c-", Namespace:"calico-system", SelfLink:"", UID:"8acdfa87-3615-4f51-932b-63fd53529270", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6656487d5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6656487d5c-69t8m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie502072579c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:39.046435 containerd[1541]: 2025-08-13 00:08:39.004 [INFO][5164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" Namespace="calico-system" Pod="calico-kube-controllers-6656487d5c-69t8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:39.046435 containerd[1541]: 2025-08-13 00:08:39.004 [INFO][5164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie502072579c ContainerID="0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" Namespace="calico-system" Pod="calico-kube-controllers-6656487d5c-69t8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:39.046435 containerd[1541]: 2025-08-13 00:08:39.008 [INFO][5164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" Namespace="calico-system" Pod="calico-kube-controllers-6656487d5c-69t8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:39.046435 containerd[1541]: 2025-08-13 00:08:39.017 [INFO][5164] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" Namespace="calico-system" Pod="calico-kube-controllers-6656487d5c-69t8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0", GenerateName:"calico-kube-controllers-6656487d5c-", Namespace:"calico-system", SelfLink:"", UID:"8acdfa87-3615-4f51-932b-63fd53529270", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6656487d5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c", Pod:"calico-kube-controllers-6656487d5c-69t8m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie502072579c", MAC:"42:19:2f:c0:dc:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:39.046435 containerd[1541]: 2025-08-13 00:08:39.036 [INFO][5164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c" Namespace="calico-system" Pod="calico-kube-controllers-6656487d5c-69t8m" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:39.142338 containerd[1541]: time="2025-08-13T00:08:39.141866759Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 00:08:39.142338 containerd[1541]: time="2025-08-13T00:08:39.141931523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 00:08:39.142338 containerd[1541]: time="2025-08-13T00:08:39.141946284Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:39.142338 containerd[1541]: time="2025-08-13T00:08:39.142039291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 00:08:39.189838 systemd-resolved[1434]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 00:08:39.228938 containerd[1541]: time="2025-08-13T00:08:39.228893320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6656487d5c-69t8m,Uid:8acdfa87-3615-4f51-932b-63fd53529270,Namespace:calico-system,Attempt:1,} returns sandbox id \"0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c\"" Aug 13 00:08:39.253378 containerd[1541]: time="2025-08-13T00:08:39.253309581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:39.255555 containerd[1541]: time="2025-08-13T00:08:39.255511331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 13 00:08:39.257617 containerd[1541]: time="2025-08-13T00:08:39.257094959Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:39.278666 containerd[1541]: time="2025-08-13T00:08:39.277638156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:39.278804 containerd[1541]: time="2025-08-13T00:08:39.278706509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 2.278521751s" Aug 13 00:08:39.278804 containerd[1541]: time="2025-08-13T00:08:39.278736751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:08:39.280490 containerd[1541]: time="2025-08-13T00:08:39.280455788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 00:08:39.283199 containerd[1541]: time="2025-08-13T00:08:39.283168092Z" level=info msg="CreateContainer within sandbox \"0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:08:39.314436 containerd[1541]: time="2025-08-13T00:08:39.314244047Z" level=info msg="CreateContainer within sandbox \"0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"50b12656ec656c737a440951ac0b5a68b392606af114f7d8eb35cdf7e765c407\"" Aug 13 00:08:39.315481 containerd[1541]: time="2025-08-13T00:08:39.315437768Z" level=info msg="StartContainer for \"50b12656ec656c737a440951ac0b5a68b392606af114f7d8eb35cdf7e765c407\"" Aug 13 00:08:39.322323 sshd[5188]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:39.331032 systemd[1]: sshd@7-10.0.0.72:22-10.0.0.1:40852.service: Deactivated successfully. Aug 13 00:08:39.334675 systemd-networkd[1223]: calic3d7f874188: Gained IPv6LL Aug 13 00:08:39.336363 systemd[1]: session-8.scope: Deactivated successfully. 
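The containerd lines above trace the CRI lifecycle: RunPodSandbox returns the 64-hex sandbox ID (which triggers the Calico CNI ADD and IPAM flow), the image is pulled, then CreateContainer and StartContainer run inside that sandbox. A hedged sketch of the same call sequence against the CRI socket — the socket path and all configuration details here are assumptions, and in this log the real caller is the kubelet:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed containerd CRI socket path.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := cri.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &cri.PodSandboxConfig{
		Metadata: &cri.PodSandboxMetadata{
			Name:      "goldmane-58fd7646b9-ntl8c",
			Namespace: "calico-system",
			Uid:       "a308d1f0-5106-4066-87c2-bd682359b04c",
			Attempt:   1,
		},
	}
	// RunPodSandbox drives the CNI ADD (the IPAM flow above) and returns
	// the sandbox ID the log prints ("returns sandbox id ...").
	sb, err := rt.RunPodSandbox(ctx, &cri.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &cri.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &cri.ContainerConfig{
			Metadata: &cri.ContainerMetadata{Name: "goldmane"},
			Image:    &cri.ImageSpec{Image: "ghcr.io/flatcar/calico/goldmane:v3.30.2"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &cri.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```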
Aug 13 00:08:39.336529 systemd-logind[1517]: Session 8 logged out. Waiting for processes to exit. Aug 13 00:08:39.339227 systemd-logind[1517]: Removed session 8. Aug 13 00:08:39.383687 containerd[1541]: time="2025-08-13T00:08:39.383642048Z" level=info msg="StartContainer for \"50b12656ec656c737a440951ac0b5a68b392606af114f7d8eb35cdf7e765c407\" returns successfully" Aug 13 00:08:39.590304 systemd-networkd[1223]: cali244be3479a7: Gained IPv6LL Aug 13 00:08:39.650122 kubelet[2615]: E0813 00:08:39.650090 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:39.650512 kubelet[2615]: E0813 00:08:39.650174 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:40.110577 kubelet[2615]: I0813 00:08:40.109871 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:08:40.110577 kubelet[2615]: E0813 00:08:40.110241 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:40.137496 kubelet[2615]: I0813 00:08:40.137430 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-568ff5db89-pgwqw" podStartSLOduration=25.62463967 podStartE2EDuration="28.137410874s" podCreationTimestamp="2025-08-13 00:08:12 +0000 UTC" firstStartedPulling="2025-08-13 00:08:36.76747985 +0000 UTC m=+40.467689095" lastFinishedPulling="2025-08-13 00:08:39.280251014 +0000 UTC m=+42.980460299" observedRunningTime="2025-08-13 00:08:39.65630852 +0000 UTC m=+43.356517805" watchObservedRunningTime="2025-08-13 00:08:40.137410874 +0000 UTC m=+43.837620159" Aug 13 00:08:40.295295 systemd-networkd[1223]: calie502072579c: Gained IPv6LL Aug 13 00:08:40.552421 containerd[1541]: time="2025-08-13T00:08:40.552296556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:40.553152 containerd[1541]: time="2025-08-13T00:08:40.553107090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 13 00:08:40.554168 containerd[1541]: time="2025-08-13T00:08:40.554140558Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:40.556997 containerd[1541]: time="2025-08-13T00:08:40.556958665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:40.557795 containerd[1541]: time="2025-08-13T00:08:40.557760959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.277270408s" Aug 13 00:08:40.557839 containerd[1541]: 
time="2025-08-13T00:08:40.557800201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 13 00:08:40.559319 containerd[1541]: time="2025-08-13T00:08:40.559034083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 00:08:40.560389 containerd[1541]: time="2025-08-13T00:08:40.560345130Z" level=info msg="CreateContainer within sandbox \"ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 00:08:40.575820 containerd[1541]: time="2025-08-13T00:08:40.575683509Z" level=info msg="CreateContainer within sandbox \"ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5138f7f31785436d14f95b1258d6604ef2814d027f307c9ace18a94e8fbffd4f\"" Aug 13 00:08:40.577546 containerd[1541]: time="2025-08-13T00:08:40.576386556Z" level=info msg="StartContainer for \"5138f7f31785436d14f95b1258d6604ef2814d027f307c9ace18a94e8fbffd4f\"" Aug 13 00:08:40.643679 containerd[1541]: time="2025-08-13T00:08:40.641769100Z" level=info msg="StartContainer for \"5138f7f31785436d14f95b1258d6604ef2814d027f307c9ace18a94e8fbffd4f\" returns successfully" Aug 13 00:08:40.657622 kubelet[2615]: E0813 00:08:40.657584 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:40.660029 kubelet[2615]: I0813 00:08:40.658182 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:08:40.660029 kubelet[2615]: E0813 00:08:40.658272 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:40.660029 kubelet[2615]: E0813 00:08:40.658561 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 13 00:08:40.832209 containerd[1541]: time="2025-08-13T00:08:40.831737240Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:40.833011 containerd[1541]: time="2025-08-13T00:08:40.832978442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 00:08:40.836920 containerd[1541]: time="2025-08-13T00:08:40.836868181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 277.804255ms" Aug 13 00:08:40.836920 containerd[1541]: time="2025-08-13T00:08:40.836914824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 13 00:08:40.839297 containerd[1541]: time="2025-08-13T00:08:40.839258900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 00:08:40.842787 containerd[1541]: 
time="2025-08-13T00:08:40.842732010Z" level=info msg="CreateContainer within sandbox \"c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 00:08:40.868565 containerd[1541]: time="2025-08-13T00:08:40.868514923Z" level=info msg="CreateContainer within sandbox \"c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8e5277c1c163b09a85098f9642568baf5f14d18ac494000a821f61feeed7b20f\"" Aug 13 00:08:40.870111 containerd[1541]: time="2025-08-13T00:08:40.870053065Z" level=info msg="StartContainer for \"8e5277c1c163b09a85098f9642568baf5f14d18ac494000a821f61feeed7b20f\"" Aug 13 00:08:40.873006 systemd-networkd[1223]: cali0ba470c20b0: Gained IPv6LL Aug 13 00:08:40.952173 kernel: bpftool[5521]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 00:08:41.001967 containerd[1541]: time="2025-08-13T00:08:41.001841659Z" level=info msg="StartContainer for \"8e5277c1c163b09a85098f9642568baf5f14d18ac494000a821f61feeed7b20f\" returns successfully" Aug 13 00:08:41.189597 systemd-networkd[1223]: vxlan.calico: Link UP Aug 13 00:08:41.189607 systemd-networkd[1223]: vxlan.calico: Gained carrier Aug 13 00:08:41.515176 kubelet[2615]: I0813 00:08:41.514567 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5wt4m" podStartSLOduration=20.715539149 podStartE2EDuration="25.514547909s" podCreationTimestamp="2025-08-13 00:08:16 +0000 UTC" firstStartedPulling="2025-08-13 00:08:35.759867753 +0000 UTC m=+39.460076998" lastFinishedPulling="2025-08-13 00:08:40.558876473 +0000 UTC m=+44.259085758" observedRunningTime="2025-08-13 00:08:40.675090513 +0000 UTC m=+44.375299798" watchObservedRunningTime="2025-08-13 00:08:41.514547909 +0000 UTC m=+45.214757194" Aug 13 00:08:41.526291 kubelet[2615]: I0813 00:08:41.526251 2615 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 00:08:41.531090 kubelet[2615]: I0813 00:08:41.528021 2615 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 00:08:41.674455 kubelet[2615]: I0813 00:08:41.674359 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-568ff5db89-898g4" podStartSLOduration=26.632381351 podStartE2EDuration="29.674342165s" podCreationTimestamp="2025-08-13 00:08:12 +0000 UTC" firstStartedPulling="2025-08-13 00:08:37.79675845 +0000 UTC m=+41.496967735" lastFinishedPulling="2025-08-13 00:08:40.838719304 +0000 UTC m=+44.538928549" observedRunningTime="2025-08-13 00:08:41.673474629 +0000 UTC m=+45.373683954" watchObservedRunningTime="2025-08-13 00:08:41.674342165 +0000 UTC m=+45.374551410" Aug 13 00:08:42.389215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4159456522.mount: Deactivated successfully. 
Aug 13 00:08:42.664004 kubelet[2615]: I0813 00:08:42.663902 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:08:42.823483 containerd[1541]: time="2025-08-13T00:08:42.823418056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:42.825804 containerd[1541]: time="2025-08-13T00:08:42.825767925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 13 00:08:42.827113 containerd[1541]: time="2025-08-13T00:08:42.827059567Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:42.835331 containerd[1541]: time="2025-08-13T00:08:42.835163842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 1.995862859s" Aug 13 00:08:42.835331 containerd[1541]: time="2025-08-13T00:08:42.835219246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 13 00:08:42.835572 containerd[1541]: time="2025-08-13T00:08:42.835277049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:42.839674 containerd[1541]: time="2025-08-13T00:08:42.839246141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 00:08:42.841295 containerd[1541]: time="2025-08-13T00:08:42.841260149Z" level=info msg="CreateContainer within sandbox \"53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 00:08:42.860207 containerd[1541]: time="2025-08-13T00:08:42.860161070Z" level=info msg="CreateContainer within sandbox \"53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"e02aa57f94a3c465e2ddad62a406135bb30953627a77a59ce897a588392f7054\"" Aug 13 00:08:42.860745 containerd[1541]: time="2025-08-13T00:08:42.860714985Z" level=info msg="StartContainer for \"e02aa57f94a3c465e2ddad62a406135bb30953627a77a59ce897a588392f7054\"" Aug 13 00:08:42.953169 containerd[1541]: time="2025-08-13T00:08:42.951611799Z" level=info msg="StartContainer for \"e02aa57f94a3c465e2ddad62a406135bb30953627a77a59ce897a588392f7054\" returns successfully" Aug 13 00:08:43.110235 systemd-networkd[1223]: vxlan.calico: Gained IPv6LL Aug 13 00:08:43.680890 kubelet[2615]: I0813 00:08:43.680767 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-ntl8c" podStartSLOduration=23.876200382 podStartE2EDuration="27.680747977s" podCreationTimestamp="2025-08-13 00:08:16 +0000 UTC" firstStartedPulling="2025-08-13 00:08:39.034500734 +0000 UTC m=+42.734710019" lastFinishedPulling="2025-08-13 00:08:42.839048329 +0000 UTC m=+46.539257614" observedRunningTime="2025-08-13 00:08:43.680386994 +0000 UTC m=+47.380596279" 
watchObservedRunningTime="2025-08-13 00:08:43.680747977 +0000 UTC m=+47.380957222" Aug 13 00:08:44.334360 systemd[1]: Started sshd@8-10.0.0.72:22-10.0.0.1:60016.service - OpenSSH per-connection server daemon (10.0.0.1:60016). Aug 13 00:08:44.383706 sshd[5679]: Accepted publickey for core from 10.0.0.1 port 60016 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:08:44.385903 sshd[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:44.392653 systemd-logind[1517]: New session 9 of user core. Aug 13 00:08:44.401626 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 00:08:44.671306 kubelet[2615]: I0813 00:08:44.671198 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:08:44.775523 sshd[5679]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:44.781419 systemd[1]: sshd@8-10.0.0.72:22-10.0.0.1:60016.service: Deactivated successfully. Aug 13 00:08:44.783156 systemd-logind[1517]: Session 9 logged out. Waiting for processes to exit. Aug 13 00:08:44.783592 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 00:08:44.787331 systemd-logind[1517]: Removed session 9. Aug 13 00:08:44.847759 containerd[1541]: time="2025-08-13T00:08:44.847703033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:44.848575 containerd[1541]: time="2025-08-13T00:08:44.848503762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 13 00:08:44.850889 containerd[1541]: time="2025-08-13T00:08:44.850848065Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:44.853770 containerd[1541]: time="2025-08-13T00:08:44.853721840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 00:08:44.854345 containerd[1541]: time="2025-08-13T00:08:44.854316636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.015031973s" Aug 13 00:08:44.854381 containerd[1541]: time="2025-08-13T00:08:44.854352918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 13 00:08:44.873249 containerd[1541]: time="2025-08-13T00:08:44.873202667Z" level=info msg="CreateContainer within sandbox \"0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 00:08:44.899621 containerd[1541]: time="2025-08-13T00:08:44.899569755Z" level=info msg="CreateContainer within sandbox \"0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3a5c6d8f0e52efb144cc9ee2dcc28571cad97b02efa8fb41b3375694684a435c\"" 
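containerd's own daemon lines (as opposed to the bracketed CNI-plugin output it relays) are logfmt: space-separated key=value pairs with Go-style quoting and backslash escapes inside msg. A small extractor for the level and msg fields, run on the "PullImage ... returns image reference" line above — a simplification; a production parser should follow the full escaping rules of a library such as github.com/go-logfmt/logfmt:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// Matches key=value pairs where the value is either a double-quoted
// string with backslash escapes or a bare token.
var pair = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

func parse(line string) map[string]string {
	out := map[string]string{}
	for _, m := range pair.FindAllStringSubmatch(line, -1) {
		k, v := m[1], m[2]
		if v[0] == '"' {
			if u, err := strconv.Unquote(v); err == nil {
				v = u // strip quotes and resolve \" escapes
			}
		}
		out[k] = v
	}
	return out
}

func main() {
	line := `time="2025-08-13T00:08:44.854352918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\""`
	kv := parse(line)
	fmt.Println(kv["level"], "|", kv["msg"])
}
```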
Aug 13 00:08:44.900385 containerd[1541]: time="2025-08-13T00:08:44.900140230Z" level=info msg="StartContainer for \"3a5c6d8f0e52efb144cc9ee2dcc28571cad97b02efa8fb41b3375694684a435c\"" Aug 13 00:08:44.974219 containerd[1541]: time="2025-08-13T00:08:44.974101338Z" level=info msg="StartContainer for \"3a5c6d8f0e52efb144cc9ee2dcc28571cad97b02efa8fb41b3375694684a435c\" returns successfully" Aug 13 00:08:45.282257 kubelet[2615]: I0813 00:08:45.282139 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:08:45.699115 kubelet[2615]: I0813 00:08:45.698517 2615 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6656487d5c-69t8m" podStartSLOduration=24.073031339 podStartE2EDuration="29.698496769s" podCreationTimestamp="2025-08-13 00:08:16 +0000 UTC" firstStartedPulling="2025-08-13 00:08:39.230141645 +0000 UTC m=+42.930350930" lastFinishedPulling="2025-08-13 00:08:44.855607075 +0000 UTC m=+48.555816360" observedRunningTime="2025-08-13 00:08:45.698125627 +0000 UTC m=+49.398334912" watchObservedRunningTime="2025-08-13 00:08:45.698496769 +0000 UTC m=+49.398706014" Aug 13 00:08:49.794532 systemd[1]: Started sshd@9-10.0.0.72:22-10.0.0.1:60028.service - OpenSSH per-connection server daemon (10.0.0.1:60028). Aug 13 00:08:49.850196 sshd[5794]: Accepted publickey for core from 10.0.0.1 port 60028 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:08:49.852595 sshd[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:49.858261 systemd-logind[1517]: New session 10 of user core. Aug 13 00:08:49.867498 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 00:08:50.256225 sshd[5794]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:50.264399 systemd[1]: Started sshd@10-10.0.0.72:22-10.0.0.1:60036.service - OpenSSH per-connection server daemon (10.0.0.1:60036). Aug 13 00:08:50.264844 systemd[1]: sshd@9-10.0.0.72:22-10.0.0.1:60028.service: Deactivated successfully. Aug 13 00:08:50.268819 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 00:08:50.270169 systemd-logind[1517]: Session 10 logged out. Waiting for processes to exit. Aug 13 00:08:50.271867 systemd-logind[1517]: Removed session 10. Aug 13 00:08:50.303885 sshd[5807]: Accepted publickey for core from 10.0.0.1 port 60036 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:08:50.305413 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:50.310231 systemd-logind[1517]: New session 11 of user core. Aug 13 00:08:50.317530 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 00:08:50.571270 sshd[5807]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:50.580452 systemd[1]: Started sshd@11-10.0.0.72:22-10.0.0.1:60042.service - OpenSSH per-connection server daemon (10.0.0.1:60042). Aug 13 00:08:50.585331 systemd[1]: sshd@10-10.0.0.72:22-10.0.0.1:60036.service: Deactivated successfully. Aug 13 00:08:50.592773 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 00:08:50.595416 systemd-logind[1517]: Session 11 logged out. Waiting for processes to exit. Aug 13 00:08:50.596547 systemd-logind[1517]: Removed session 11. 
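Sessions 9 through 12 above all follow the same pattern: publickey accept against the same RSA key, PAM session open for core, a brief exchange, then the session scope and the per-connection sshd@N service are deactivated, all within a second or two — consistent with a scripted client opening short-lived sessions. A sketch of such a client; the host, user, and key path are assumptions, and a real client should verify host keys rather than ignore them:

```go
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.ssh/id_rsa")) // assumed key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; verify keys in practice
	}
	client, err := ssh.Dial("tcp", "10.0.0.72:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// One short-lived session: run a no-op and disconnect, producing the
	// accept/open/close triplets seen in the journal.
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	if _, err := sess.CombinedOutput("true"); err != nil {
		log.Fatal(err)
	}
}
```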
Aug 13 00:08:50.644306 sshd[5820]: Accepted publickey for core from 10.0.0.1 port 60042 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:08:50.646174 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:50.651295 systemd-logind[1517]: New session 12 of user core. Aug 13 00:08:50.661516 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 00:08:50.852014 sshd[5820]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:50.856378 systemd[1]: sshd@11-10.0.0.72:22-10.0.0.1:60042.service: Deactivated successfully. Aug 13 00:08:50.859687 systemd-logind[1517]: Session 12 logged out. Waiting for processes to exit. Aug 13 00:08:50.860712 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 00:08:50.862110 systemd-logind[1517]: Removed session 12. Aug 13 00:08:51.201142 kubelet[2615]: I0813 00:08:51.200223 2615 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 00:08:55.868785 systemd[1]: Started sshd@12-10.0.0.72:22-10.0.0.1:33942.service - OpenSSH per-connection server daemon (10.0.0.1:33942). Aug 13 00:08:55.906116 sshd[5894]: Accepted publickey for core from 10.0.0.1 port 33942 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:08:55.907649 sshd[5894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:55.912204 systemd-logind[1517]: New session 13 of user core. Aug 13 00:08:55.922436 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 00:08:56.083619 sshd[5894]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:56.095431 systemd[1]: Started sshd@13-10.0.0.72:22-10.0.0.1:33948.service - OpenSSH per-connection server daemon (10.0.0.1:33948). Aug 13 00:08:56.095876 systemd[1]: sshd@12-10.0.0.72:22-10.0.0.1:33942.service: Deactivated successfully. Aug 13 00:08:56.098942 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 00:08:56.099444 systemd-logind[1517]: Session 13 logged out. Waiting for processes to exit. Aug 13 00:08:56.101854 systemd-logind[1517]: Removed session 13. Aug 13 00:08:56.138975 sshd[5906]: Accepted publickey for core from 10.0.0.1 port 33948 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:08:56.140525 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:56.144801 systemd-logind[1517]: New session 14 of user core. Aug 13 00:08:56.160452 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 00:08:56.386223 containerd[1541]: time="2025-08-13T00:08:56.385551091Z" level=info msg="StopPodSandbox for \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\"" Aug 13 00:08:56.465576 sshd[5906]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:56.475954 systemd[1]: Started sshd@14-10.0.0.72:22-10.0.0.1:33954.service - OpenSSH per-connection server daemon (10.0.0.1:33954). Aug 13 00:08:56.476437 systemd[1]: sshd@13-10.0.0.72:22-10.0.0.1:33948.service: Deactivated successfully. Aug 13 00:08:56.480634 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 00:08:56.480641 systemd-logind[1517]: Session 14 logged out. Waiting for processes to exit. Aug 13 00:08:56.484379 systemd-logind[1517]: Removed session 14. 
Aug 13 00:08:56.526657 sshd[5939]: Accepted publickey for core from 10.0.0.1 port 33954 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:08:56.527549 sshd[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:56.537061 systemd-logind[1517]: New session 15 of user core. Aug 13 00:08:56.545451 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 00:08:56.588816 containerd[1541]: 2025-08-13 00:08:56.538 [WARNING][5933] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f323d6ae-fc54-4d5d-b0e3-3e41312708c1", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c", Pod:"coredns-7c65d6cfc9-qhll4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali244be3479a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:56.588816 containerd[1541]: 2025-08-13 00:08:56.540 [INFO][5933] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:56.588816 containerd[1541]: 2025-08-13 00:08:56.540 [INFO][5933] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" iface="eth0" netns="" Aug 13 00:08:56.588816 containerd[1541]: 2025-08-13 00:08:56.540 [INFO][5933] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:56.588816 containerd[1541]: 2025-08-13 00:08:56.540 [INFO][5933] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:56.588816 containerd[1541]: 2025-08-13 00:08:56.572 [INFO][5967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" HandleID="k8s-pod-network.e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Workload="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:56.588816 containerd[1541]: 2025-08-13 00:08:56.572 [INFO][5967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:56.588816 containerd[1541]: 2025-08-13 00:08:56.572 [INFO][5967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:56.588816 containerd[1541]: 2025-08-13 00:08:56.582 [WARNING][5967] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" HandleID="k8s-pod-network.e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Workload="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:56.588816 containerd[1541]: 2025-08-13 00:08:56.582 [INFO][5967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" HandleID="k8s-pod-network.e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Workload="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:56.588816 containerd[1541]: 2025-08-13 00:08:56.583 [INFO][5967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:56.588816 containerd[1541]: 2025-08-13 00:08:56.586 [INFO][5933] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:56.589334 containerd[1541]: time="2025-08-13T00:08:56.588846689Z" level=info msg="TearDown network for sandbox \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\" successfully" Aug 13 00:08:56.589334 containerd[1541]: time="2025-08-13T00:08:56.588875170Z" level=info msg="StopPodSandbox for \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\" returns successfully" Aug 13 00:08:56.589814 containerd[1541]: time="2025-08-13T00:08:56.589776016Z" level=info msg="RemovePodSandbox for \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\"" Aug 13 00:08:56.592510 containerd[1541]: time="2025-08-13T00:08:56.592459433Z" level=info msg="Forcibly stopping sandbox \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\"" Aug 13 00:08:56.691911 containerd[1541]: 2025-08-13 00:08:56.653 [WARNING][5992] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f323d6ae-fc54-4d5d-b0e3-3e41312708c1", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e68e056681c7676731d2f2b371f2b4466493718a40e972282f8e8a1ae7dbcb3c", Pod:"coredns-7c65d6cfc9-qhll4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali244be3479a7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:56.691911 containerd[1541]: 2025-08-13 00:08:56.654 [INFO][5992] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:56.691911 containerd[1541]: 2025-08-13 00:08:56.654 [INFO][5992] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" iface="eth0" netns="" Aug 13 00:08:56.691911 containerd[1541]: 2025-08-13 00:08:56.654 [INFO][5992] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:56.691911 containerd[1541]: 2025-08-13 00:08:56.654 [INFO][5992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:56.691911 containerd[1541]: 2025-08-13 00:08:56.677 [INFO][6001] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" HandleID="k8s-pod-network.e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Workload="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:56.691911 containerd[1541]: 2025-08-13 00:08:56.677 [INFO][6001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:56.691911 containerd[1541]: 2025-08-13 00:08:56.677 [INFO][6001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:08:56.691911 containerd[1541]: 2025-08-13 00:08:56.686 [WARNING][6001] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" HandleID="k8s-pod-network.e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Workload="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:56.691911 containerd[1541]: 2025-08-13 00:08:56.686 [INFO][6001] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" HandleID="k8s-pod-network.e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Workload="localhost-k8s-coredns--7c65d6cfc9--qhll4-eth0" Aug 13 00:08:56.691911 containerd[1541]: 2025-08-13 00:08:56.688 [INFO][6001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:56.691911 containerd[1541]: 2025-08-13 00:08:56.690 [INFO][5992] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d" Aug 13 00:08:56.692419 containerd[1541]: time="2025-08-13T00:08:56.691958342Z" level=info msg="TearDown network for sandbox \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\" successfully" Aug 13 00:08:56.744225 containerd[1541]: time="2025-08-13T00:08:56.743857786Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:08:56.744225 containerd[1541]: time="2025-08-13T00:08:56.743951231Z" level=info msg="RemovePodSandbox \"e0d70ce4c94bb53c697abd7f8d3e565abe7fd6df9db0f2f553fd2cd77d0b530d\" returns successfully" Aug 13 00:08:56.745860 containerd[1541]: time="2025-08-13T00:08:56.745827766Z" level=info msg="StopPodSandbox for \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\"" Aug 13 00:08:56.849190 containerd[1541]: 2025-08-13 00:08:56.789 [WARNING][6019] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0", GenerateName:"calico-kube-controllers-6656487d5c-", Namespace:"calico-system", SelfLink:"", UID:"8acdfa87-3615-4f51-932b-63fd53529270", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6656487d5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c", Pod:"calico-kube-controllers-6656487d5c-69t8m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie502072579c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:56.849190 containerd[1541]: 2025-08-13 00:08:56.789 [INFO][6019] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:56.849190 containerd[1541]: 2025-08-13 00:08:56.789 [INFO][6019] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" iface="eth0" netns="" Aug 13 00:08:56.849190 containerd[1541]: 2025-08-13 00:08:56.789 [INFO][6019] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:56.849190 containerd[1541]: 2025-08-13 00:08:56.789 [INFO][6019] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:56.849190 containerd[1541]: 2025-08-13 00:08:56.834 [INFO][6027] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" HandleID="k8s-pod-network.a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Workload="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:56.849190 containerd[1541]: 2025-08-13 00:08:56.834 [INFO][6027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:56.849190 containerd[1541]: 2025-08-13 00:08:56.834 [INFO][6027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:56.849190 containerd[1541]: 2025-08-13 00:08:56.843 [WARNING][6027] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" HandleID="k8s-pod-network.a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Workload="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:56.849190 containerd[1541]: 2025-08-13 00:08:56.843 [INFO][6027] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" HandleID="k8s-pod-network.a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Workload="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:56.849190 containerd[1541]: 2025-08-13 00:08:56.845 [INFO][6027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:56.849190 containerd[1541]: 2025-08-13 00:08:56.846 [INFO][6019] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:56.849609 containerd[1541]: time="2025-08-13T00:08:56.849239995Z" level=info msg="TearDown network for sandbox \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\" successfully" Aug 13 00:08:56.849609 containerd[1541]: time="2025-08-13T00:08:56.849265076Z" level=info msg="StopPodSandbox for \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\" returns successfully" Aug 13 00:08:56.849743 containerd[1541]: time="2025-08-13T00:08:56.849692658Z" level=info msg="RemovePodSandbox for \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\"" Aug 13 00:08:56.849743 containerd[1541]: time="2025-08-13T00:08:56.849734500Z" level=info msg="Forcibly stopping sandbox \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\"" Aug 13 00:08:57.005595 containerd[1541]: 2025-08-13 00:08:56.894 [WARNING][6045] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0", GenerateName:"calico-kube-controllers-6656487d5c-", Namespace:"calico-system", SelfLink:"", UID:"8acdfa87-3615-4f51-932b-63fd53529270", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6656487d5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0d1855fc0cb945c17756bead95125d04e4e123a421bf0257dfd0e8e5b186846c", Pod:"calico-kube-controllers-6656487d5c-69t8m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie502072579c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:57.005595 containerd[1541]: 2025-08-13 00:08:56.894 [INFO][6045] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:57.005595 containerd[1541]: 2025-08-13 00:08:56.894 [INFO][6045] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" iface="eth0" netns="" Aug 13 00:08:57.005595 containerd[1541]: 2025-08-13 00:08:56.894 [INFO][6045] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:57.005595 containerd[1541]: 2025-08-13 00:08:56.894 [INFO][6045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:57.005595 containerd[1541]: 2025-08-13 00:08:56.934 [INFO][6055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" HandleID="k8s-pod-network.a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Workload="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:57.005595 containerd[1541]: 2025-08-13 00:08:56.934 [INFO][6055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:57.005595 containerd[1541]: 2025-08-13 00:08:56.934 [INFO][6055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:57.005595 containerd[1541]: 2025-08-13 00:08:56.984 [WARNING][6055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" HandleID="k8s-pod-network.a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Workload="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:57.005595 containerd[1541]: 2025-08-13 00:08:56.984 [INFO][6055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" HandleID="k8s-pod-network.a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Workload="localhost-k8s-calico--kube--controllers--6656487d5c--69t8m-eth0" Aug 13 00:08:57.005595 containerd[1541]: 2025-08-13 00:08:56.992 [INFO][6055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:57.005595 containerd[1541]: 2025-08-13 00:08:57.001 [INFO][6045] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32" Aug 13 00:08:57.005595 containerd[1541]: time="2025-08-13T00:08:57.005408029Z" level=info msg="TearDown network for sandbox \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\" successfully" Aug 13 00:08:57.014161 containerd[1541]: time="2025-08-13T00:08:57.013816733Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:08:57.014161 containerd[1541]: time="2025-08-13T00:08:57.013902217Z" level=info msg="RemovePodSandbox \"a5f514df6b7a31291b872e989dcf62bb6610bc875229ac95d82b22026d913b32\" returns successfully" Aug 13 00:08:57.014446 containerd[1541]: time="2025-08-13T00:08:57.014387562Z" level=info msg="StopPodSandbox for \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\"" Aug 13 00:08:57.146324 containerd[1541]: 2025-08-13 00:08:57.097 [WARNING][6075] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" WorkloadEndpoint="localhost-k8s-whisker--c5fd5c744--nscz9-eth0" Aug 13 00:08:57.146324 containerd[1541]: 2025-08-13 00:08:57.100 [INFO][6075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:57.146324 containerd[1541]: 2025-08-13 00:08:57.100 [INFO][6075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" iface="eth0" netns="" Aug 13 00:08:57.146324 containerd[1541]: 2025-08-13 00:08:57.100 [INFO][6075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:57.146324 containerd[1541]: 2025-08-13 00:08:57.100 [INFO][6075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:57.146324 containerd[1541]: 2025-08-13 00:08:57.130 [INFO][6084] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" HandleID="k8s-pod-network.d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Workload="localhost-k8s-whisker--c5fd5c744--nscz9-eth0" Aug 13 00:08:57.146324 containerd[1541]: 2025-08-13 00:08:57.131 [INFO][6084] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:57.146324 containerd[1541]: 2025-08-13 00:08:57.131 [INFO][6084] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:57.146324 containerd[1541]: 2025-08-13 00:08:57.140 [WARNING][6084] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" HandleID="k8s-pod-network.d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Workload="localhost-k8s-whisker--c5fd5c744--nscz9-eth0" Aug 13 00:08:57.146324 containerd[1541]: 2025-08-13 00:08:57.140 [INFO][6084] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" HandleID="k8s-pod-network.d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Workload="localhost-k8s-whisker--c5fd5c744--nscz9-eth0" Aug 13 00:08:57.146324 containerd[1541]: 2025-08-13 00:08:57.142 [INFO][6084] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:57.146324 containerd[1541]: 2025-08-13 00:08:57.144 [INFO][6075] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:57.146324 containerd[1541]: time="2025-08-13T00:08:57.146204047Z" level=info msg="TearDown network for sandbox \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\" successfully" Aug 13 00:08:57.146324 containerd[1541]: time="2025-08-13T00:08:57.146229568Z" level=info msg="StopPodSandbox for \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\" returns successfully" Aug 13 00:08:57.147166 containerd[1541]: time="2025-08-13T00:08:57.146866720Z" level=info msg="RemovePodSandbox for \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\"" Aug 13 00:08:57.147166 containerd[1541]: time="2025-08-13T00:08:57.146896562Z" level=info msg="Forcibly stopping sandbox \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\"" Aug 13 00:08:57.222662 containerd[1541]: 2025-08-13 00:08:57.185 [WARNING][6102] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" WorkloadEndpoint="localhost-k8s-whisker--c5fd5c744--nscz9-eth0" Aug 13 00:08:57.222662 containerd[1541]: 2025-08-13 00:08:57.186 [INFO][6102] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:57.222662 containerd[1541]: 2025-08-13 00:08:57.186 [INFO][6102] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" iface="eth0" netns="" Aug 13 00:08:57.222662 containerd[1541]: 2025-08-13 00:08:57.186 [INFO][6102] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:57.222662 containerd[1541]: 2025-08-13 00:08:57.186 [INFO][6102] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:57.222662 containerd[1541]: 2025-08-13 00:08:57.205 [INFO][6111] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" HandleID="k8s-pod-network.d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Workload="localhost-k8s-whisker--c5fd5c744--nscz9-eth0" Aug 13 00:08:57.222662 containerd[1541]: 2025-08-13 00:08:57.205 [INFO][6111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:57.222662 containerd[1541]: 2025-08-13 00:08:57.205 [INFO][6111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:57.222662 containerd[1541]: 2025-08-13 00:08:57.216 [WARNING][6111] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" HandleID="k8s-pod-network.d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Workload="localhost-k8s-whisker--c5fd5c744--nscz9-eth0" Aug 13 00:08:57.222662 containerd[1541]: 2025-08-13 00:08:57.216 [INFO][6111] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" HandleID="k8s-pod-network.d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Workload="localhost-k8s-whisker--c5fd5c744--nscz9-eth0" Aug 13 00:08:57.222662 containerd[1541]: 2025-08-13 00:08:57.218 [INFO][6111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:57.222662 containerd[1541]: 2025-08-13 00:08:57.220 [INFO][6102] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a" Aug 13 00:08:57.224121 containerd[1541]: time="2025-08-13T00:08:57.223104964Z" level=info msg="TearDown network for sandbox \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\" successfully" Aug 13 00:08:57.226247 containerd[1541]: time="2025-08-13T00:08:57.226214521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:08:57.226405 containerd[1541]: time="2025-08-13T00:08:57.226388369Z" level=info msg="RemovePodSandbox \"d1014ef40e7ab0dad420e85ff03f9ac76744d48e0ede138b6fab6ef8fa60e93a\" returns successfully" Aug 13 00:08:57.227052 containerd[1541]: time="2025-08-13T00:08:57.227033002Z" level=info msg="StopPodSandbox for \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\"" Aug 13 00:08:57.315298 containerd[1541]: 2025-08-13 00:08:57.260 [WARNING][6129] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d417f09c-4d26-45a8-bedf-3d32ed52c91e", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225", Pod:"coredns-7c65d6cfc9-8n2ps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3d7f874188", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:57.315298 containerd[1541]: 2025-08-13 00:08:57.260 [INFO][6129] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:57.315298 containerd[1541]: 2025-08-13 00:08:57.260 [INFO][6129] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" iface="eth0" netns="" Aug 13 00:08:57.315298 containerd[1541]: 2025-08-13 00:08:57.260 [INFO][6129] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:57.315298 containerd[1541]: 2025-08-13 00:08:57.260 [INFO][6129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:57.315298 containerd[1541]: 2025-08-13 00:08:57.290 [INFO][6138] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" HandleID="k8s-pod-network.78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Workload="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:57.315298 containerd[1541]: 2025-08-13 00:08:57.290 [INFO][6138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:57.315298 containerd[1541]: 2025-08-13 00:08:57.290 [INFO][6138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:08:57.315298 containerd[1541]: 2025-08-13 00:08:57.300 [WARNING][6138] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" HandleID="k8s-pod-network.78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Workload="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:57.315298 containerd[1541]: 2025-08-13 00:08:57.300 [INFO][6138] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" HandleID="k8s-pod-network.78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Workload="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:57.315298 containerd[1541]: 2025-08-13 00:08:57.303 [INFO][6138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:57.315298 containerd[1541]: 2025-08-13 00:08:57.309 [INFO][6129] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:57.315769 containerd[1541]: time="2025-08-13T00:08:57.315742234Z" level=info msg="TearDown network for sandbox \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\" successfully" Aug 13 00:08:57.315821 containerd[1541]: time="2025-08-13T00:08:57.315809437Z" level=info msg="StopPodSandbox for \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\" returns successfully" Aug 13 00:08:57.317768 containerd[1541]: time="2025-08-13T00:08:57.317731934Z" level=info msg="RemovePodSandbox for \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\"" Aug 13 00:08:57.317850 containerd[1541]: time="2025-08-13T00:08:57.317771136Z" level=info msg="Forcibly stopping sandbox \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\"" Aug 13 00:08:57.419183 containerd[1541]: 2025-08-13 00:08:57.378 [WARNING][6155] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"d417f09c-4d26-45a8-bedf-3d32ed52c91e", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"142b44cd1cefb7d5a7d8b8d21e31d4deb79fe496468c8ddc9114aeb657f65225", Pod:"coredns-7c65d6cfc9-8n2ps", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic3d7f874188", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:57.419183 containerd[1541]: 2025-08-13 00:08:57.378 [INFO][6155] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:57.419183 containerd[1541]: 2025-08-13 00:08:57.378 [INFO][6155] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" iface="eth0" netns="" Aug 13 00:08:57.419183 containerd[1541]: 2025-08-13 00:08:57.378 [INFO][6155] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:57.419183 containerd[1541]: 2025-08-13 00:08:57.378 [INFO][6155] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:57.419183 containerd[1541]: 2025-08-13 00:08:57.403 [INFO][6164] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" HandleID="k8s-pod-network.78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Workload="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:57.419183 containerd[1541]: 2025-08-13 00:08:57.404 [INFO][6164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:57.419183 containerd[1541]: 2025-08-13 00:08:57.404 [INFO][6164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 00:08:57.419183 containerd[1541]: 2025-08-13 00:08:57.412 [WARNING][6164] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" HandleID="k8s-pod-network.78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Workload="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:57.419183 containerd[1541]: 2025-08-13 00:08:57.412 [INFO][6164] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" HandleID="k8s-pod-network.78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Workload="localhost-k8s-coredns--7c65d6cfc9--8n2ps-eth0" Aug 13 00:08:57.419183 containerd[1541]: 2025-08-13 00:08:57.414 [INFO][6164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:57.419183 containerd[1541]: 2025-08-13 00:08:57.416 [INFO][6155] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea" Aug 13 00:08:57.419806 containerd[1541]: time="2025-08-13T00:08:57.419230971Z" level=info msg="TearDown network for sandbox \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\" successfully" Aug 13 00:08:57.422248 containerd[1541]: time="2025-08-13T00:08:57.422209241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:08:57.422310 containerd[1541]: time="2025-08-13T00:08:57.422274044Z" level=info msg="RemovePodSandbox \"78ce004e6dbb04423fde2bc437a91ed50fad26ba737e2f2cc21254c4f190acea\" returns successfully" Aug 13 00:08:57.422756 containerd[1541]: time="2025-08-13T00:08:57.422722507Z" level=info msg="StopPodSandbox for \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\"" Aug 13 00:08:57.501165 containerd[1541]: 2025-08-13 00:08:57.460 [WARNING][6182] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0", GenerateName:"calico-apiserver-568ff5db89-", Namespace:"calico-apiserver", SelfLink:"", UID:"67aae47b-9ca7-424c-9205-116dd9244930", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568ff5db89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9", Pod:"calico-apiserver-568ff5db89-898g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9baae1565c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:57.501165 containerd[1541]: 2025-08-13 00:08:57.460 [INFO][6182] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:57.501165 containerd[1541]: 2025-08-13 00:08:57.460 [INFO][6182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" iface="eth0" netns="" Aug 13 00:08:57.501165 containerd[1541]: 2025-08-13 00:08:57.460 [INFO][6182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:57.501165 containerd[1541]: 2025-08-13 00:08:57.460 [INFO][6182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:57.501165 containerd[1541]: 2025-08-13 00:08:57.483 [INFO][6190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" HandleID="k8s-pod-network.1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Workload="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:57.501165 containerd[1541]: 2025-08-13 00:08:57.484 [INFO][6190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:57.501165 containerd[1541]: 2025-08-13 00:08:57.484 [INFO][6190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:57.501165 containerd[1541]: 2025-08-13 00:08:57.493 [WARNING][6190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" HandleID="k8s-pod-network.1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Workload="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:57.501165 containerd[1541]: 2025-08-13 00:08:57.493 [INFO][6190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" HandleID="k8s-pod-network.1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Workload="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:57.501165 containerd[1541]: 2025-08-13 00:08:57.495 [INFO][6190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:57.501165 containerd[1541]: 2025-08-13 00:08:57.498 [INFO][6182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:57.502017 containerd[1541]: time="2025-08-13T00:08:57.501221944Z" level=info msg="TearDown network for sandbox \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\" successfully" Aug 13 00:08:57.502017 containerd[1541]: time="2025-08-13T00:08:57.501247146Z" level=info msg="StopPodSandbox for \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\" returns successfully" Aug 13 00:08:57.502472 containerd[1541]: time="2025-08-13T00:08:57.502442526Z" level=info msg="RemovePodSandbox for \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\"" Aug 13 00:08:57.502570 containerd[1541]: time="2025-08-13T00:08:57.502477368Z" level=info msg="Forcibly stopping sandbox \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\"" Aug 13 00:08:57.584464 containerd[1541]: 2025-08-13 00:08:57.550 [WARNING][6209] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0", GenerateName:"calico-apiserver-568ff5db89-", Namespace:"calico-apiserver", SelfLink:"", UID:"67aae47b-9ca7-424c-9205-116dd9244930", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568ff5db89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c34b8cdf305cd8b7d2d07a4e5a1df0a40ad61f827d423c46cbadbc1239e2fcb9", Pod:"calico-apiserver-568ff5db89-898g4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9baae1565c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:57.584464 containerd[1541]: 2025-08-13 00:08:57.550 [INFO][6209] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:57.584464 containerd[1541]: 2025-08-13 00:08:57.550 [INFO][6209] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" iface="eth0" netns="" Aug 13 00:08:57.584464 containerd[1541]: 2025-08-13 00:08:57.550 [INFO][6209] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:57.584464 containerd[1541]: 2025-08-13 00:08:57.550 [INFO][6209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:57.584464 containerd[1541]: 2025-08-13 00:08:57.569 [INFO][6218] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" HandleID="k8s-pod-network.1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Workload="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:57.584464 containerd[1541]: 2025-08-13 00:08:57.569 [INFO][6218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:57.584464 containerd[1541]: 2025-08-13 00:08:57.569 [INFO][6218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:57.584464 containerd[1541]: 2025-08-13 00:08:57.578 [WARNING][6218] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" HandleID="k8s-pod-network.1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Workload="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:57.584464 containerd[1541]: 2025-08-13 00:08:57.578 [INFO][6218] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" HandleID="k8s-pod-network.1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Workload="localhost-k8s-calico--apiserver--568ff5db89--898g4-eth0" Aug 13 00:08:57.584464 containerd[1541]: 2025-08-13 00:08:57.579 [INFO][6218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:57.584464 containerd[1541]: 2025-08-13 00:08:57.581 [INFO][6209] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe" Aug 13 00:08:57.584464 containerd[1541]: time="2025-08-13T00:08:57.584361056Z" level=info msg="TearDown network for sandbox \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\" successfully" Aug 13 00:08:57.588102 containerd[1541]: time="2025-08-13T00:08:57.587807509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:08:57.588102 containerd[1541]: time="2025-08-13T00:08:57.587883753Z" level=info msg="RemovePodSandbox \"1c5e00e6e5e8a298bd6c56bdc7962bceb91cf37dee211cc1b2fdd703dfefb1fe\" returns successfully" Aug 13 00:08:57.588378 containerd[1541]: time="2025-08-13T00:08:57.588354377Z" level=info msg="StopPodSandbox for \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\"" Aug 13 00:08:57.661990 containerd[1541]: 2025-08-13 00:08:57.624 [WARNING][6235] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0", GenerateName:"calico-apiserver-568ff5db89-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e176c4c-6fda-4c82-bb44-ae8b69c41d34", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568ff5db89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5", Pod:"calico-apiserver-568ff5db89-pgwqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18c8bb7d3e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:57.661990 containerd[1541]: 2025-08-13 00:08:57.625 [INFO][6235] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:57.661990 containerd[1541]: 2025-08-13 00:08:57.625 [INFO][6235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" iface="eth0" netns="" Aug 13 00:08:57.661990 containerd[1541]: 2025-08-13 00:08:57.625 [INFO][6235] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:57.661990 containerd[1541]: 2025-08-13 00:08:57.625 [INFO][6235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:57.661990 containerd[1541]: 2025-08-13 00:08:57.647 [INFO][6244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" HandleID="k8s-pod-network.8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Workload="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:57.661990 containerd[1541]: 2025-08-13 00:08:57.648 [INFO][6244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:57.661990 containerd[1541]: 2025-08-13 00:08:57.648 [INFO][6244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:57.661990 containerd[1541]: 2025-08-13 00:08:57.656 [WARNING][6244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" HandleID="k8s-pod-network.8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Workload="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:57.661990 containerd[1541]: 2025-08-13 00:08:57.656 [INFO][6244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" HandleID="k8s-pod-network.8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Workload="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:57.661990 containerd[1541]: 2025-08-13 00:08:57.657 [INFO][6244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:57.661990 containerd[1541]: 2025-08-13 00:08:57.659 [INFO][6235] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:57.662630 containerd[1541]: time="2025-08-13T00:08:57.662115495Z" level=info msg="TearDown network for sandbox \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\" successfully" Aug 13 00:08:57.662630 containerd[1541]: time="2025-08-13T00:08:57.662504115Z" level=info msg="StopPodSandbox for \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\" returns successfully" Aug 13 00:08:57.663024 containerd[1541]: time="2025-08-13T00:08:57.662956618Z" level=info msg="RemovePodSandbox for \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\"" Aug 13 00:08:57.663024 containerd[1541]: time="2025-08-13T00:08:57.663001180Z" level=info msg="Forcibly stopping sandbox \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\"" Aug 13 00:08:57.749527 containerd[1541]: 2025-08-13 00:08:57.702 [WARNING][6263] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0", GenerateName:"calico-apiserver-568ff5db89-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e176c4c-6fda-4c82-bb44-ae8b69c41d34", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"568ff5db89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ebb1eb18de2d999ca2845090ada59dbf58f4a20932ea73121a7c28f56aadca5", Pod:"calico-apiserver-568ff5db89-pgwqw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18c8bb7d3e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:57.749527 containerd[1541]: 2025-08-13 00:08:57.702 [INFO][6263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:57.749527 containerd[1541]: 2025-08-13 00:08:57.702 [INFO][6263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" iface="eth0" netns="" Aug 13 00:08:57.749527 containerd[1541]: 2025-08-13 00:08:57.702 [INFO][6263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:57.749527 containerd[1541]: 2025-08-13 00:08:57.702 [INFO][6263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:57.749527 containerd[1541]: 2025-08-13 00:08:57.728 [INFO][6271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" HandleID="k8s-pod-network.8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Workload="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:57.749527 containerd[1541]: 2025-08-13 00:08:57.728 [INFO][6271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:57.749527 containerd[1541]: 2025-08-13 00:08:57.728 [INFO][6271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:57.749527 containerd[1541]: 2025-08-13 00:08:57.738 [WARNING][6271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" HandleID="k8s-pod-network.8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Workload="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:57.749527 containerd[1541]: 2025-08-13 00:08:57.738 [INFO][6271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" HandleID="k8s-pod-network.8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Workload="localhost-k8s-calico--apiserver--568ff5db89--pgwqw-eth0" Aug 13 00:08:57.749527 containerd[1541]: 2025-08-13 00:08:57.741 [INFO][6271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:57.749527 containerd[1541]: 2025-08-13 00:08:57.743 [INFO][6263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85" Aug 13 00:08:57.750130 containerd[1541]: time="2025-08-13T00:08:57.749574184Z" level=info msg="TearDown network for sandbox \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\" successfully" Aug 13 00:08:57.752656 containerd[1541]: time="2025-08-13T00:08:57.752617778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:08:57.753050 containerd[1541]: time="2025-08-13T00:08:57.752690182Z" level=info msg="RemovePodSandbox \"8c65f20a579ed770edaf07c44adb8f5c86cc3f3bed02eac087ca5d62aa7f2a85\" returns successfully" Aug 13 00:08:57.753236 containerd[1541]: time="2025-08-13T00:08:57.753197807Z" level=info msg="StopPodSandbox for \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\"" Aug 13 00:08:57.870567 containerd[1541]: 2025-08-13 00:08:57.806 [WARNING][6289] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5wt4m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"edf56ce2-0695-4a38-a297-9fcd045b8bd5", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373", Pod:"csi-node-driver-5wt4m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7c481e3b16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:57.870567 containerd[1541]: 2025-08-13 00:08:57.806 [INFO][6289] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:57.870567 containerd[1541]: 2025-08-13 00:08:57.806 [INFO][6289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" iface="eth0" netns="" Aug 13 00:08:57.870567 containerd[1541]: 2025-08-13 00:08:57.806 [INFO][6289] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:57.870567 containerd[1541]: 2025-08-13 00:08:57.806 [INFO][6289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:57.870567 containerd[1541]: 2025-08-13 00:08:57.847 [INFO][6298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" HandleID="k8s-pod-network.cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Workload="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:57.870567 containerd[1541]: 2025-08-13 00:08:57.848 [INFO][6298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:57.870567 containerd[1541]: 2025-08-13 00:08:57.848 [INFO][6298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:57.870567 containerd[1541]: 2025-08-13 00:08:57.858 [WARNING][6298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" HandleID="k8s-pod-network.cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Workload="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:57.870567 containerd[1541]: 2025-08-13 00:08:57.858 [INFO][6298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" HandleID="k8s-pod-network.cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Workload="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:57.870567 containerd[1541]: 2025-08-13 00:08:57.860 [INFO][6298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:57.870567 containerd[1541]: 2025-08-13 00:08:57.863 [INFO][6289] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:57.870567 containerd[1541]: time="2025-08-13T00:08:57.870528642Z" level=info msg="TearDown network for sandbox \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\" successfully" Aug 13 00:08:57.870567 containerd[1541]: time="2025-08-13T00:08:57.870559564Z" level=info msg="StopPodSandbox for \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\" returns successfully" Aug 13 00:08:57.871799 containerd[1541]: time="2025-08-13T00:08:57.871763864Z" level=info msg="RemovePodSandbox for \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\"" Aug 13 00:08:57.871907 containerd[1541]: time="2025-08-13T00:08:57.871808507Z" level=info msg="Forcibly stopping sandbox \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\"" Aug 13 00:08:57.958897 containerd[1541]: 2025-08-13 00:08:57.912 [WARNING][6315] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5wt4m-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"edf56ce2-0695-4a38-a297-9fcd045b8bd5", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ae0236a3151895bfece18854776022cff3cf3c9e56874a429b31c70413555373", Pod:"csi-node-driver-5wt4m", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib7c481e3b16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:57.958897 containerd[1541]: 2025-08-13 00:08:57.912 [INFO][6315] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:57.958897 containerd[1541]: 2025-08-13 00:08:57.912 [INFO][6315] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" iface="eth0" netns="" Aug 13 00:08:57.958897 containerd[1541]: 2025-08-13 00:08:57.912 [INFO][6315] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:57.958897 containerd[1541]: 2025-08-13 00:08:57.912 [INFO][6315] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:57.958897 containerd[1541]: 2025-08-13 00:08:57.936 [INFO][6323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" HandleID="k8s-pod-network.cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Workload="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:57.958897 containerd[1541]: 2025-08-13 00:08:57.936 [INFO][6323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:57.958897 containerd[1541]: 2025-08-13 00:08:57.936 [INFO][6323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:57.958897 containerd[1541]: 2025-08-13 00:08:57.948 [WARNING][6323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" HandleID="k8s-pod-network.cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Workload="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:57.958897 containerd[1541]: 2025-08-13 00:08:57.948 [INFO][6323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" HandleID="k8s-pod-network.cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Workload="localhost-k8s-csi--node--driver--5wt4m-eth0" Aug 13 00:08:57.958897 containerd[1541]: 2025-08-13 00:08:57.950 [INFO][6323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:57.958897 containerd[1541]: 2025-08-13 00:08:57.956 [INFO][6315] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621" Aug 13 00:08:57.959500 containerd[1541]: time="2025-08-13T00:08:57.958944339Z" level=info msg="TearDown network for sandbox \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\" successfully" Aug 13 00:08:57.965045 containerd[1541]: time="2025-08-13T00:08:57.964993484Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:08:57.965182 containerd[1541]: time="2025-08-13T00:08:57.965149812Z" level=info msg="RemovePodSandbox \"cc560b4de300afb2ca715be280b50b085417ea2885f415513d81d581a5084621\" returns successfully" Aug 13 00:08:57.965649 containerd[1541]: time="2025-08-13T00:08:57.965614516Z" level=info msg="StopPodSandbox for \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\"" Aug 13 00:08:58.052268 containerd[1541]: 2025-08-13 00:08:58.014 [WARNING][6341] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a308d1f0-5106-4066-87c2-bd682359b04c", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26", Pod:"goldmane-58fd7646b9-ntl8c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0ba470c20b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:58.052268 containerd[1541]: 2025-08-13 00:08:58.014 [INFO][6341] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:58.052268 containerd[1541]: 2025-08-13 00:08:58.014 [INFO][6341] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" iface="eth0" netns="" Aug 13 00:08:58.052268 containerd[1541]: 2025-08-13 00:08:58.014 [INFO][6341] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:58.052268 containerd[1541]: 2025-08-13 00:08:58.014 [INFO][6341] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:58.052268 containerd[1541]: 2025-08-13 00:08:58.034 [INFO][6350] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" HandleID="k8s-pod-network.7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Workload="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:58.052268 containerd[1541]: 2025-08-13 00:08:58.034 [INFO][6350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:58.052268 containerd[1541]: 2025-08-13 00:08:58.034 [INFO][6350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:58.052268 containerd[1541]: 2025-08-13 00:08:58.043 [WARNING][6350] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" HandleID="k8s-pod-network.7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Workload="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:58.052268 containerd[1541]: 2025-08-13 00:08:58.043 [INFO][6350] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" HandleID="k8s-pod-network.7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Workload="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:58.052268 containerd[1541]: 2025-08-13 00:08:58.045 [INFO][6350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:58.052268 containerd[1541]: 2025-08-13 00:08:58.047 [INFO][6341] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:58.052737 containerd[1541]: time="2025-08-13T00:08:58.052309700Z" level=info msg="TearDown network for sandbox \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\" successfully" Aug 13 00:08:58.052737 containerd[1541]: time="2025-08-13T00:08:58.052344582Z" level=info msg="StopPodSandbox for \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\" returns successfully" Aug 13 00:08:58.054837 containerd[1541]: time="2025-08-13T00:08:58.054099430Z" level=info msg="RemovePodSandbox for \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\"" Aug 13 00:08:58.054837 containerd[1541]: time="2025-08-13T00:08:58.054139512Z" level=info msg="Forcibly stopping sandbox \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\"" Aug 13 00:08:58.145553 containerd[1541]: 2025-08-13 00:08:58.097 [WARNING][6368] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"a308d1f0-5106-4066-87c2-bd682359b04c", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 0, 8, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53d3cf7a588ff4749d595e0760e076126537271181bcaac7a32e9dff1b652b26", Pod:"goldmane-58fd7646b9-ntl8c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0ba470c20b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 00:08:58.145553 containerd[1541]: 2025-08-13 00:08:58.098 [INFO][6368] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:58.145553 containerd[1541]: 2025-08-13 00:08:58.098 [INFO][6368] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" iface="eth0" netns="" Aug 13 00:08:58.145553 containerd[1541]: 2025-08-13 00:08:58.098 [INFO][6368] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:58.145553 containerd[1541]: 2025-08-13 00:08:58.098 [INFO][6368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:58.145553 containerd[1541]: 2025-08-13 00:08:58.125 [INFO][6376] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" HandleID="k8s-pod-network.7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Workload="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:58.145553 containerd[1541]: 2025-08-13 00:08:58.125 [INFO][6376] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 00:08:58.145553 containerd[1541]: 2025-08-13 00:08:58.126 [INFO][6376] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 00:08:58.145553 containerd[1541]: 2025-08-13 00:08:58.135 [WARNING][6376] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" HandleID="k8s-pod-network.7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Workload="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:58.145553 containerd[1541]: 2025-08-13 00:08:58.135 [INFO][6376] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" HandleID="k8s-pod-network.7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Workload="localhost-k8s-goldmane--58fd7646b9--ntl8c-eth0" Aug 13 00:08:58.145553 containerd[1541]: 2025-08-13 00:08:58.137 [INFO][6376] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 00:08:58.145553 containerd[1541]: 2025-08-13 00:08:58.143 [INFO][6368] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539" Aug 13 00:08:58.145553 containerd[1541]: time="2025-08-13T00:08:58.145532313Z" level=info msg="TearDown network for sandbox \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\" successfully" Aug 13 00:08:58.152536 containerd[1541]: time="2025-08-13T00:08:58.152473780Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 00:08:58.152654 containerd[1541]: time="2025-08-13T00:08:58.152560984Z" level=info msg="RemovePodSandbox \"7d45e3bc318fe9b03419612f39bdf9ab2a98e137a40176179ed78a2da8369539\" returns successfully" Aug 13 00:08:58.287049 sshd[5939]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:58.296390 systemd[1]: Started sshd@15-10.0.0.72:22-10.0.0.1:33962.service - OpenSSH per-connection server daemon (10.0.0.1:33962). Aug 13 00:08:58.297852 systemd[1]: sshd@14-10.0.0.72:22-10.0.0.1:33954.service: Deactivated successfully. Aug 13 00:08:58.302032 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 00:08:58.305744 systemd-logind[1517]: Session 15 logged out. Waiting for processes to exit. Aug 13 00:08:58.311600 systemd-logind[1517]: Removed session 15. Aug 13 00:08:58.345738 sshd[6386]: Accepted publickey for core from 10.0.0.1 port 33962 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 13 00:08:58.347318 sshd[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 00:08:58.360919 systemd-logind[1517]: New session 16 of user core. Aug 13 00:08:58.368447 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 00:08:58.956229 sshd[6386]: pam_unix(sshd:session): session closed for user core Aug 13 00:08:58.969398 systemd[1]: Started sshd@16-10.0.0.72:22-10.0.0.1:33974.service - OpenSSH per-connection server daemon (10.0.0.1:33974). Aug 13 00:08:58.970449 systemd[1]: sshd@15-10.0.0.72:22-10.0.0.1:33962.service: Deactivated successfully. Aug 13 00:08:58.973761 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 00:08:58.976297 systemd-logind[1517]: Session 16 logged out. Waiting for processes to exit. Aug 13 00:08:58.977532 systemd-logind[1517]: Removed session 16. 
Aug 13 00:08:59.006141 sshd[6403]: Accepted publickey for core from 10.0.0.1 port 33974 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:08:59.007473 sshd[6403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:08:59.011988 systemd-logind[1517]: New session 17 of user core.
Aug 13 00:08:59.026394 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 13 00:08:59.147212 sshd[6403]: pam_unix(sshd:session): session closed for user core
Aug 13 00:08:59.150757 systemd[1]: sshd@16-10.0.0.72:22-10.0.0.1:33974.service: Deactivated successfully.
Aug 13 00:08:59.153262 systemd-logind[1517]: Session 17 logged out. Waiting for processes to exit.
Aug 13 00:08:59.154016 systemd[1]: session-17.scope: Deactivated successfully.
Aug 13 00:08:59.155101 systemd-logind[1517]: Removed session 17.
Aug 13 00:09:04.163391 systemd[1]: Started sshd@17-10.0.0.72:22-10.0.0.1:58280.service - OpenSSH per-connection server daemon (10.0.0.1:58280).
Aug 13 00:09:04.202647 sshd[6433]: Accepted publickey for core from 10.0.0.1 port 58280 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:09:04.204051 sshd[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:09:04.208476 systemd-logind[1517]: New session 18 of user core.
Aug 13 00:09:04.225590 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 00:09:04.363364 sshd[6433]: pam_unix(sshd:session): session closed for user core
Aug 13 00:09:04.367174 systemd[1]: sshd@17-10.0.0.72:22-10.0.0.1:58280.service: Deactivated successfully.
Aug 13 00:09:04.370210 systemd-logind[1517]: Session 18 logged out. Waiting for processes to exit.
Aug 13 00:09:04.370554 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 00:09:04.372002 systemd-logind[1517]: Removed session 18.
Aug 13 00:09:09.377391 systemd[1]: Started sshd@18-10.0.0.72:22-10.0.0.1:58294.service - OpenSSH per-connection server daemon (10.0.0.1:58294).
Aug 13 00:09:09.418997 sshd[6448]: Accepted publickey for core from 10.0.0.1 port 58294 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:09:09.420111 sshd[6448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:09:09.424722 systemd-logind[1517]: New session 19 of user core.
Aug 13 00:09:09.431457 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 00:09:09.575489 sshd[6448]: pam_unix(sshd:session): session closed for user core
Aug 13 00:09:09.579440 systemd[1]: sshd@18-10.0.0.72:22-10.0.0.1:58294.service: Deactivated successfully.
Aug 13 00:09:09.581569 systemd-logind[1517]: Session 19 logged out. Waiting for processes to exit.
Aug 13 00:09:09.581575 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 00:09:09.582625 systemd-logind[1517]: Removed session 19.
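
Each sshd@N-<local>:22-<remote>:port.service above is a per-connection unit: systemd holds the port-22 listener itself and, for every accepted connection, spawns a short-lived templated service running sshd in inetd mode, which is why a service deactivates as soon as its session closes. A rough Go sketch of that accept-and-spawn pattern (the /usr/sbin/sshd path and -i flag match stock OpenSSH; the rest is illustrative, not what systemd actually executes):

package main

import (
	"log"
	"net"
	"os/exec"
)

func main() {
	// systemd's socket unit plays this role: one listener, many children.
	ln, err := net.Listen("tcp", ":22")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Print(err)
			continue
		}
		// Each connection becomes its own short-lived handler, the
		// analogue of one sshd@N-<local>-<remote>.service instance.
		go handle(conn)
	}
}

func handle(conn net.Conn) {
	defer conn.Close()
	tcp, ok := conn.(*net.TCPConn)
	if !ok {
		return
	}
	f, err := tcp.File() // duplicate the socket fd for the child
	if err != nil {
		return
	}
	defer f.Close()
	// "sshd -i" is OpenSSH's inetd mode: speak the protocol on stdin/stdout.
	cmd := exec.Command("/usr/sbin/sshd", "-i")
	cmd.Stdin, cmd.Stdout = f, f
	if err := cmd.Run(); err != nil {
		log.Print(err)
	}
	// On a clean exit, systemd would log "Deactivated successfully" here.
}

The churn in the log (sessions 15 through 20 opening and closing within seconds) is normal for this design: every session scope and per-connection unit is meant to be disposable.
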
Aug 13 00:09:11.391744 kubelet[2615]: E0813 00:09:11.391698 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:09:12.391409 kubelet[2615]: E0813 00:09:12.391369 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:09:13.391268 kubelet[2615]: E0813 00:09:13.391164 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:09:14.392098 kubelet[2615]: E0813 00:09:14.391908 2615 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 13 00:09:14.591690 systemd[1]: Started sshd@19-10.0.0.72:22-10.0.0.1:33142.service - OpenSSH per-connection server daemon (10.0.0.1:33142).
Aug 13 00:09:14.628297 sshd[6484]: Accepted publickey for core from 10.0.0.1 port 33142 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 13 00:09:14.629924 sshd[6484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 00:09:14.637383 systemd-logind[1517]: New session 20 of user core.
Aug 13 00:09:14.648691 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 00:09:14.779368 sshd[6484]: pam_unix(sshd:session): session closed for user core
Aug 13 00:09:14.782876 systemd[1]: sshd@19-10.0.0.72:22-10.0.0.1:33142.service: Deactivated successfully.
Aug 13 00:09:14.785664 systemd-logind[1517]: Session 20 logged out. Waiting for processes to exit.
Aug 13 00:09:14.786365 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 00:09:14.787767 systemd-logind[1517]: Removed session 20.
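
The repeated dns.go:153 errors above are kubelet clamping the node's resolver configuration: the glibc resolver honors at most three nameservers and kubelet enforces the same cap, so extra entries in the resolv.conf it reads are dropped and the surviving line (here 1.1.1.1 1.0.0.1 8.8.8.8) is logged on each pass. A minimal sketch of that clamp, with illustrative names and a hypothetical fourth nameserver standing in for whatever the node's file actually contained:

package main

import (
	"fmt"
	"strings"
)

// glibc's resolver (MAXNS) and kubelet both cap nameservers at three.
const maxNameservers = 3

// capNameservers keeps the first three entries and reports whether any
// were dropped -- the condition behind "Nameserver limits were exceeded".
func capNameservers(ns []string) (applied []string, omitted bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// 8.8.4.4 is a hypothetical fourth entry; the log only shows the
	// three survivors. With four entries present, the warning fires.
	hostResolvConf := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, omitted := capNameservers(hostResolvConf)
	if omitted {
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}

Trimming the node's /etc/resolv.conf (or the file kubelet's --resolv-conf flag points at) to three nameservers silences the message; the repeats here are kubelet re-reading the same oversized file roughly once per sync.
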