Aug 12 23:55:50.927476 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 12 23:55:50.927497 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Aug 12 22:21:53 -00 2025
Aug 12 23:55:50.927507 kernel: KASLR enabled
Aug 12 23:55:50.927513 kernel: efi: EFI v2.7 by EDK II
Aug 12 23:55:50.927518 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Aug 12 23:55:50.927524 kernel: random: crng init done
Aug 12 23:55:50.927531 kernel: ACPI: Early table checksum verification disabled
Aug 12 23:55:50.927537 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Aug 12 23:55:50.927543 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 12 23:55:50.927551 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:50.927557 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:50.927563 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:50.927569 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:50.927575 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:50.927582 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:50.927590 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:50.927597 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:50.927603 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:55:50.927609 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 12 23:55:50.927615 kernel: NUMA: Failed to initialise from firmware
Aug 12 23:55:50.927622 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:55:50.927628 kernel: NUMA: NODE_DATA [mem 0xdc956800-0xdc95bfff]
Aug 12 23:55:50.927634 kernel: Zone ranges:
Aug 12 23:55:50.927641 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:55:50.927647 kernel: DMA32 empty
Aug 12 23:55:50.927654 kernel: Normal empty
Aug 12 23:55:50.927661 kernel: Movable zone start for each node
Aug 12 23:55:50.927667 kernel: Early memory node ranges
Aug 12 23:55:50.927673 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Aug 12 23:55:50.927680 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Aug 12 23:55:50.927694 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Aug 12 23:55:50.927700 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 12 23:55:50.927707 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 12 23:55:50.927713 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 12 23:55:50.927720 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 12 23:55:50.927726 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:55:50.927732 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 12 23:55:50.927741 kernel: psci: probing for conduit method from ACPI.
Aug 12 23:55:50.927747 kernel: psci: PSCIv1.1 detected in firmware.
Aug 12 23:55:50.927754 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 12 23:55:50.927763 kernel: psci: Trusted OS migration not required
Aug 12 23:55:50.927769 kernel: psci: SMC Calling Convention v1.1
Aug 12 23:55:50.927776 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 12 23:55:50.927784 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Aug 12 23:55:50.927791 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Aug 12 23:55:50.927798 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 12 23:55:50.927805 kernel: Detected PIPT I-cache on CPU0
Aug 12 23:55:50.927811 kernel: CPU features: detected: GIC system register CPU interface
Aug 12 23:55:50.927818 kernel: CPU features: detected: Hardware dirty bit management
Aug 12 23:55:50.927825 kernel: CPU features: detected: Spectre-v4
Aug 12 23:55:50.927831 kernel: CPU features: detected: Spectre-BHB
Aug 12 23:55:50.927838 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 12 23:55:50.927845 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 12 23:55:50.927853 kernel: CPU features: detected: ARM erratum 1418040
Aug 12 23:55:50.927859 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 12 23:55:50.927866 kernel: alternatives: applying boot alternatives
Aug 12 23:55:50.927874 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 12 23:55:50.927881 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 12 23:55:50.927888 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 12 23:55:50.927894 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 12 23:55:50.927901 kernel: Fallback order for Node 0: 0
Aug 12 23:55:50.927908 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Aug 12 23:55:50.927914 kernel: Policy zone: DMA
Aug 12 23:55:50.927921 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 12 23:55:50.927929 kernel: software IO TLB: area num 4.
Aug 12 23:55:50.927936 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Aug 12 23:55:50.927943 kernel: Memory: 2386396K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185892K reserved, 0K cma-reserved)
Aug 12 23:55:50.927950 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 12 23:55:50.927956 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 12 23:55:50.927964 kernel: rcu: RCU event tracing is enabled.
Aug 12 23:55:50.927971 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 12 23:55:50.927978 kernel: Trampoline variant of Tasks RCU enabled.
Aug 12 23:55:50.927984 kernel: Tracing variant of Tasks RCU enabled.
Aug 12 23:55:50.927991 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 12 23:55:50.927998 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 12 23:55:50.928006 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 12 23:55:50.928013 kernel: GICv3: 256 SPIs implemented
Aug 12 23:55:50.928020 kernel: GICv3: 0 Extended SPIs implemented
Aug 12 23:55:50.928027 kernel: Root IRQ handler: gic_handle_irq
Aug 12 23:55:50.928033 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 12 23:55:50.928040 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 12 23:55:50.928055 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 12 23:55:50.928062 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Aug 12 23:55:50.928069 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Aug 12 23:55:50.928075 kernel: GICv3: using LPI property table @0x00000000400f0000
Aug 12 23:55:50.928082 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Aug 12 23:55:50.928089 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 12 23:55:50.928098 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:55:50.928105 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 12 23:55:50.928112 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 12 23:55:50.928119 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 12 23:55:50.928126 kernel: arm-pv: using stolen time PV
Aug 12 23:55:50.928133 kernel: Console: colour dummy device 80x25
Aug 12 23:55:50.928140 kernel: ACPI: Core revision 20230628
Aug 12 23:55:50.928147 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 12 23:55:50.928154 kernel: pid_max: default: 32768 minimum: 301
Aug 12 23:55:50.928161 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 12 23:55:50.928169 kernel: landlock: Up and running.
Aug 12 23:55:50.928176 kernel: SELinux: Initializing.
Aug 12 23:55:50.928183 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:55:50.928190 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:55:50.928197 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:55:50.928204 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:55:50.928211 kernel: rcu: Hierarchical SRCU implementation.
Aug 12 23:55:50.928218 kernel: rcu: Max phase no-delay instances is 400.
Aug 12 23:55:50.928225 kernel: Platform MSI: ITS@0x8080000 domain created
Aug 12 23:55:50.928233 kernel: PCI/MSI: ITS@0x8080000 domain created
Aug 12 23:55:50.928240 kernel: Remapping and enabling EFI services.
Aug 12 23:55:50.928247 kernel: smp: Bringing up secondary CPUs ...
Aug 12 23:55:50.928254 kernel: Detected PIPT I-cache on CPU1
Aug 12 23:55:50.928261 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 12 23:55:50.928267 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Aug 12 23:55:50.928274 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:55:50.928281 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 12 23:55:50.928288 kernel: Detected PIPT I-cache on CPU2
Aug 12 23:55:50.928295 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 12 23:55:50.928303 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Aug 12 23:55:50.928310 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:55:50.928321 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 12 23:55:50.928330 kernel: Detected PIPT I-cache on CPU3
Aug 12 23:55:50.928337 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 12 23:55:50.928345 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Aug 12 23:55:50.928352 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:55:50.928359 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 12 23:55:50.928366 kernel: smp: Brought up 1 node, 4 CPUs
Aug 12 23:55:50.928375 kernel: SMP: Total of 4 processors activated.
Aug 12 23:55:50.928382 kernel: CPU features: detected: 32-bit EL0 Support
Aug 12 23:55:50.928389 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 12 23:55:50.928396 kernel: CPU features: detected: Common not Private translations
Aug 12 23:55:50.928404 kernel: CPU features: detected: CRC32 instructions
Aug 12 23:55:50.928411 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 12 23:55:50.928418 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 12 23:55:50.928425 kernel: CPU features: detected: LSE atomic instructions
Aug 12 23:55:50.928434 kernel: CPU features: detected: Privileged Access Never
Aug 12 23:55:50.928441 kernel: CPU features: detected: RAS Extension Support
Aug 12 23:55:50.928448 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 12 23:55:50.928455 kernel: CPU: All CPU(s) started at EL1
Aug 12 23:55:50.928463 kernel: alternatives: applying system-wide alternatives
Aug 12 23:55:50.928470 kernel: devtmpfs: initialized
Aug 12 23:55:50.928477 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 12 23:55:50.928485 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 12 23:55:50.928492 kernel: pinctrl core: initialized pinctrl subsystem
Aug 12 23:55:50.928500 kernel: SMBIOS 3.0.0 present.
Aug 12 23:55:50.928508 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Aug 12 23:55:50.928515 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 12 23:55:50.928522 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 12 23:55:50.928530 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 12 23:55:50.928537 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 12 23:55:50.928544 kernel: audit: initializing netlink subsys (disabled)
Aug 12 23:55:50.928552 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Aug 12 23:55:50.928559 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 12 23:55:50.928567 kernel: cpuidle: using governor menu
Aug 12 23:55:50.928575 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 12 23:55:50.928582 kernel: ASID allocator initialised with 32768 entries
Aug 12 23:55:50.928589 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 12 23:55:50.928596 kernel: Serial: AMBA PL011 UART driver
Aug 12 23:55:50.928604 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 12 23:55:50.928611 kernel: Modules: 0 pages in range for non-PLT usage
Aug 12 23:55:50.928618 kernel: Modules: 509008 pages in range for PLT usage
Aug 12 23:55:50.928625 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 12 23:55:50.928634 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 12 23:55:50.928641 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 12 23:55:50.928648 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 12 23:55:50.928655 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 12 23:55:50.928663 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 12 23:55:50.928670 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 12 23:55:50.928701 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 12 23:55:50.928711 kernel: ACPI: Added _OSI(Module Device)
Aug 12 23:55:50.928718 kernel: ACPI: Added _OSI(Processor Device)
Aug 12 23:55:50.928728 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 12 23:55:50.928735 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 12 23:55:50.928742 kernel: ACPI: Interpreter enabled
Aug 12 23:55:50.928750 kernel: ACPI: Using GIC for interrupt routing
Aug 12 23:55:50.928757 kernel: ACPI: MCFG table detected, 1 entries
Aug 12 23:55:50.928764 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 12 23:55:50.928771 kernel: printk: console [ttyAMA0] enabled
Aug 12 23:55:50.928778 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 12 23:55:50.928916 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 12 23:55:50.928995 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 12 23:55:50.929087 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 12 23:55:50.929156 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 12 23:55:50.929219 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 12 23:55:50.929229 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 12 23:55:50.929236 kernel: PCI host bridge to bus 0000:00
Aug 12 23:55:50.929309 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 12 23:55:50.929373 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 12 23:55:50.929434 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 12 23:55:50.929492 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 12 23:55:50.929572 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Aug 12 23:55:50.929775 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Aug 12 23:55:50.929853 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Aug 12 23:55:50.929929 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Aug 12 23:55:50.929998 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 12 23:55:50.930103 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 12 23:55:50.930173 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Aug 12 23:55:50.930239 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Aug 12 23:55:50.930303 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 12 23:55:50.930375 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 12 23:55:50.930439 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 12 23:55:50.930469 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 12 23:55:50.930477 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 12 23:55:50.930484 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 12 23:55:50.930492 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 12 23:55:50.930500 kernel: iommu: Default domain type: Translated
Aug 12 23:55:50.930507 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 12 23:55:50.930515 kernel: efivars: Registered efivars operations
Aug 12 23:55:50.930522 kernel: vgaarb: loaded
Aug 12 23:55:50.930532 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 12 23:55:50.930539 kernel: VFS: Disk quotas dquot_6.6.0
Aug 12 23:55:50.930547 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 12 23:55:50.930554 kernel: pnp: PnP ACPI init
Aug 12 23:55:50.930633 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 12 23:55:50.930644 kernel: pnp: PnP ACPI: found 1 devices
Aug 12 23:55:50.930651 kernel: NET: Registered PF_INET protocol family
Aug 12 23:55:50.930658 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 12 23:55:50.930668 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 12 23:55:50.930675 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 12 23:55:50.930690 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 12 23:55:50.930698 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 12 23:55:50.931132 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 12 23:55:50.931147 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:55:50.931295 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:55:50.931304 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 12 23:55:50.931311 kernel: PCI: CLS 0 bytes, default 64
Aug 12 23:55:50.931326 kernel: kvm [1]: HYP mode not available
Aug 12 23:55:50.931336 kernel: Initialise system trusted keyrings
Aug 12 23:55:50.931344 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 12 23:55:50.931351 kernel: Key type asymmetric registered
Aug 12 23:55:50.931359 kernel: Asymmetric key parser 'x509' registered
Aug 12 23:55:50.931368 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 12 23:55:50.931378 kernel: io scheduler mq-deadline registered
Aug 12 23:55:50.931387 kernel: io scheduler kyber registered
Aug 12 23:55:50.931395 kernel: io scheduler bfq registered
Aug 12 23:55:50.931404 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 12 23:55:50.931412 kernel: ACPI: button: Power Button [PWRB]
Aug 12 23:55:50.931420 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 12 23:55:50.931592 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 12 23:55:50.931607 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 12 23:55:50.931615 kernel: thunder_xcv, ver 1.0
Aug 12 23:55:50.931622 kernel: thunder_bgx, ver 1.0
Aug 12 23:55:50.931630 kernel: nicpf, ver 1.0
Aug 12 23:55:50.931637 kernel: nicvf, ver 1.0
Aug 12 23:55:50.931772 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 12 23:55:50.931840 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-12T23:55:50 UTC (1755042950)
Aug 12 23:55:50.931850 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 12 23:55:50.931858 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Aug 12 23:55:50.931866 kernel: watchdog: Delayed init of the lockup detector failed: -19
Aug 12 23:55:50.931873 kernel: watchdog: Hard watchdog permanently disabled
Aug 12 23:55:50.931881 kernel: NET: Registered PF_INET6 protocol family
Aug 12 23:55:50.931888 kernel: Segment Routing with IPv6
Aug 12 23:55:50.931899 kernel: In-situ OAM (IOAM) with IPv6
Aug 12 23:55:50.931906 kernel: NET: Registered PF_PACKET protocol family
Aug 12 23:55:50.931913 kernel: Key type dns_resolver registered
Aug 12 23:55:50.931921 kernel: registered taskstats version 1
Aug 12 23:55:50.931928 kernel: Loading compiled-in X.509 certificates
Aug 12 23:55:50.931935 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 7263800c6d21650660e2b030c1023dce09b1e8b6'
Aug 12 23:55:50.931943 kernel: Key type .fscrypt registered
Aug 12 23:55:50.931950 kernel: Key type fscrypt-provisioning registered
Aug 12 23:55:50.931957 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 12 23:55:50.931966 kernel: ima: Allocated hash algorithm: sha1
Aug 12 23:55:50.931974 kernel: ima: No architecture policies found
Aug 12 23:55:50.931981 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 12 23:55:50.931989 kernel: clk: Disabling unused clocks
Aug 12 23:55:50.931996 kernel: Freeing unused kernel memory: 39424K
Aug 12 23:55:50.932004 kernel: Run /init as init process
Aug 12 23:55:50.932011 kernel: with arguments:
Aug 12 23:55:50.932018 kernel: /init
Aug 12 23:55:50.932025 kernel: with environment:
Aug 12 23:55:50.932034 kernel: HOME=/
Aug 12 23:55:50.932041 kernel: TERM=linux
Aug 12 23:55:50.932061 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 12 23:55:50.932070 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 12 23:55:50.932079 systemd[1]: Detected virtualization kvm.
Aug 12 23:55:50.932088 systemd[1]: Detected architecture arm64.
Aug 12 23:55:50.932095 systemd[1]: Running in initrd.
Aug 12 23:55:50.932105 systemd[1]: No hostname configured, using default hostname.
Aug 12 23:55:50.932112 systemd[1]: Hostname set to <localhost>.
Aug 12 23:55:50.932120 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:55:50.932128 systemd[1]: Queued start job for default target initrd.target.
Aug 12 23:55:50.932136 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:55:50.932144 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:55:50.932153 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 12 23:55:50.932161 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:55:50.932171 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 12 23:55:50.932179 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 12 23:55:50.932188 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 12 23:55:50.932197 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 12 23:55:50.932205 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:55:50.932212 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:55:50.932220 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:55:50.932230 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:55:50.932238 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:55:50.932246 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:55:50.932253 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:55:50.932262 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:55:50.932269 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 12 23:55:50.932277 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 12 23:55:50.932285 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:55:50.932293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:55:50.932303 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:55:50.932310 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:55:50.932318 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 12 23:55:50.932326 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:55:50.932334 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 12 23:55:50.932342 systemd[1]: Starting systemd-fsck-usr.service...
Aug 12 23:55:50.932350 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:55:50.932358 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:55:50.932367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:55:50.932376 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 12 23:55:50.932384 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:55:50.932391 systemd[1]: Finished systemd-fsck-usr.service.
Aug 12 23:55:50.932418 systemd-journald[236]: Collecting audit messages is disabled.
Aug 12 23:55:50.932439 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 12 23:55:50.932447 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:55:50.932455 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 12 23:55:50.932464 systemd-journald[236]: Journal started
Aug 12 23:55:50.932484 systemd-journald[236]: Runtime Journal (/run/log/journal/e2bbdb97e94f49af9e7d5be63dd8868f) is 5.9M, max 47.3M, 41.4M free.
Aug 12 23:55:50.917975 systemd-modules-load[238]: Inserted module 'overlay'
Aug 12 23:55:50.935239 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:55:50.936474 systemd-modules-load[238]: Inserted module 'br_netfilter'
Aug 12 23:55:50.937802 kernel: Bridge firewalling registered
Aug 12 23:55:50.937820 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:55:50.938916 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:55:50.940067 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:55:50.944356 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:55:50.945921 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:55:50.948845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:55:50.956317 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:55:50.960400 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:55:50.961687 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:55:50.963852 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:55:50.972239 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 12 23:55:50.974337 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:55:50.982084 dracut-cmdline[276]: dracut-dracut-053
Aug 12 23:55:50.984720 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2f9df6e9e6c671c457040a64675390bbff42294b08c628cd2dc472ed8120146a
Aug 12 23:55:51.009784 systemd-resolved[278]: Positive Trust Anchors:
Aug 12 23:55:51.009802 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:55:51.009833 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:55:51.014915 systemd-resolved[278]: Defaulting to hostname 'linux'.
Aug 12 23:55:51.017140 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:55:51.018701 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:55:51.058076 kernel: SCSI subsystem initialized
Aug 12 23:55:51.064068 kernel: Loading iSCSI transport class v2.0-870.
Aug 12 23:55:51.072073 kernel: iscsi: registered transport (tcp)
Aug 12 23:55:51.086461 kernel: iscsi: registered transport (qla4xxx)
Aug 12 23:55:51.086485 kernel: QLogic iSCSI HBA Driver
Aug 12 23:55:51.135835 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:55:51.148287 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 12 23:55:51.166757 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 12 23:55:51.166830 kernel: device-mapper: uevent: version 1.0.3
Aug 12 23:55:51.166842 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 12 23:55:51.225085 kernel: raid6: neonx8 gen() 15776 MB/s
Aug 12 23:55:51.242062 kernel: raid6: neonx4 gen() 15653 MB/s
Aug 12 23:55:51.259063 kernel: raid6: neonx2 gen() 13186 MB/s
Aug 12 23:55:51.276064 kernel: raid6: neonx1 gen() 10485 MB/s
Aug 12 23:55:51.293063 kernel: raid6: int64x8 gen() 6962 MB/s
Aug 12 23:55:51.310060 kernel: raid6: int64x4 gen() 7343 MB/s
Aug 12 23:55:51.327091 kernel: raid6: int64x2 gen() 6130 MB/s
Aug 12 23:55:51.344066 kernel: raid6: int64x1 gen() 5055 MB/s
Aug 12 23:55:51.344107 kernel: raid6: using algorithm neonx8 gen() 15776 MB/s
Aug 12 23:55:51.361085 kernel: raid6: .... xor() 11928 MB/s, rmw enabled
Aug 12 23:55:51.361131 kernel: raid6: using neon recovery algorithm
Aug 12 23:55:51.370091 kernel: xor: measuring software checksum speed
Aug 12 23:55:51.370225 kernel: 8regs : 19769 MB/sec
Aug 12 23:55:51.370247 kernel: 32regs : 18186 MB/sec
Aug 12 23:55:51.371204 kernel: arm64_neon : 25544 MB/sec
Aug 12 23:55:51.371231 kernel: xor: using function: arm64_neon (25544 MB/sec)
Aug 12 23:55:51.425081 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 12 23:55:51.437116 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:55:51.448856 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:55:51.465483 systemd-udevd[461]: Using default interface naming scheme 'v255'.
Aug 12 23:55:51.468802 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:55:51.488336 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 12 23:55:51.501862 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation
Aug 12 23:55:51.532139 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 12 23:55:51.540301 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:55:51.583596 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:55:51.592137 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 12 23:55:51.607464 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 12 23:55:51.608847 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 12 23:55:51.611403 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:55:51.613574 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:55:51.622258 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 12 23:55:51.632452 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Aug 12 23:55:51.632642 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Aug 12 23:55:51.635412 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 12 23:55:51.635549 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:55:51.638620 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:55:51.640764 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:55:51.640922 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:55:51.642926 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:55:51.648304 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 12 23:55:51.648333 kernel: GPT:9289727 != 19775487
Aug 12 23:55:51.648343 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 12 23:55:51.648353 kernel: GPT:9289727 != 19775487
Aug 12 23:55:51.649063 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 12 23:55:51.649077 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:55:51.650333 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:55:51.651763 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 12 23:55:51.661453 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:55:51.664853 kernel: BTRFS: device fsid 03408483-5051-409a-aab4-4e6d5027e982 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (510)
Aug 12 23:55:51.671239 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:55:51.675084 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (523)
Aug 12 23:55:51.676081 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 12 23:55:51.690456 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 12 23:55:51.694561 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 12 23:55:51.695510 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 12 23:55:51.697893 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:55:51.703356 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:55:51.718275 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 12 23:55:51.724585 disk-uuid[562]: Primary Header is updated.
Aug 12 23:55:51.724585 disk-uuid[562]: Secondary Entries is updated.
Aug 12 23:55:51.724585 disk-uuid[562]: Secondary Header is updated.
Aug 12 23:55:51.729082 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:55:51.743076 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:55:51.746077 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:55:52.748074 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 12 23:55:52.748718 disk-uuid[563]: The operation has completed successfully.
Aug 12 23:55:52.771891 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 12 23:55:52.772010 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 12 23:55:52.796243 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 12 23:55:52.799386 sh[576]: Success
Aug 12 23:55:52.815099 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Aug 12 23:55:52.858722 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 12 23:55:52.860432 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 12 23:55:52.862095 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 12 23:55:52.873389 kernel: BTRFS info (device dm-0): first mount of filesystem 03408483-5051-409a-aab4-4e6d5027e982
Aug 12 23:55:52.873434 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:55:52.873446 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 12 23:55:52.875366 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 12 23:55:52.875384 kernel: BTRFS info (device dm-0): using free space tree
Aug 12 23:55:52.879141 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 12 23:55:52.880312 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 12 23:55:52.891242 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 12 23:55:52.892668 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 12 23:55:52.900438 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 12 23:55:52.900496 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:55:52.900507 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:55:52.903189 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:55:52.910855 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 12 23:55:52.912447 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 12 23:55:52.927176 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 12 23:55:52.935242 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 12 23:55:52.998988 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 12 23:55:53.011234 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:55:53.054643 systemd-networkd[762]: lo: Link UP
Aug 12 23:55:53.054656 systemd-networkd[762]: lo: Gained carrier
Aug 12 23:55:53.055419 systemd-networkd[762]: Enumeration completed
Aug 12 23:55:53.055548 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:55:53.056125 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:55:53.056129 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:55:53.056864 systemd-networkd[762]: eth0: Link UP
Aug 12 23:55:53.056867 systemd-networkd[762]: eth0: Gained carrier
Aug 12 23:55:53.056874 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:55:53.057767 systemd[1]: Reached target network.target - Network.
Aug 12 23:55:53.065148 ignition[679]: Ignition 2.19.0
Aug 12 23:55:53.065155 ignition[679]: Stage: fetch-offline
Aug 12 23:55:53.065195 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:55:53.065203 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:55:53.065420 ignition[679]: parsed url from cmdline: ""
Aug 12 23:55:53.065424 ignition[679]: no config URL provided
Aug 12 23:55:53.065428 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Aug 12 23:55:53.065435 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Aug 12 23:55:53.065460 ignition[679]: op(1): [started] loading QEMU firmware config module
Aug 12 23:55:53.065464 ignition[679]: op(1): executing: "modprobe" "qemu_fw_cfg"
Aug 12 23:55:53.076770 ignition[679]: op(1): [finished] loading QEMU firmware config module
Aug 12 23:55:53.080108 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 12 23:55:53.117849 ignition[679]: parsing config with SHA512: 2933ef20a2c286ebbe636a7254d956ae40c064a289929dcc8016575165cdebf8a1c9414e6fc9f3364bbf824700c72dab6df92a9ea07f22afed42497bb852a293
Aug 12 23:55:53.123608 unknown[679]: fetched base config from "system"
Aug 12 23:55:53.123624 unknown[679]: fetched user config from "qemu"
Aug 12 23:55:53.124451 ignition[679]: fetch-offline: fetch-offline passed
Aug 12 23:55:53.126571 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 12 23:55:53.124526 ignition[679]: Ignition finished successfully
Aug 12 23:55:53.127953 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Aug 12 23:55:53.142305 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 12 23:55:53.154036 ignition[773]: Ignition 2.19.0
Aug 12 23:55:53.154136 ignition[773]: Stage: kargs
Aug 12 23:55:53.154319 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:55:53.154329 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:55:53.155216 ignition[773]: kargs: kargs passed
Aug 12 23:55:53.155261 ignition[773]: Ignition finished successfully
Aug 12 23:55:53.157467 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 12 23:55:53.163258 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 12 23:55:53.174469 ignition[782]: Ignition 2.19.0
Aug 12 23:55:53.174480 ignition[782]: Stage: disks
Aug 12 23:55:53.174658 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Aug 12 23:55:53.174668 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:55:53.175579 ignition[782]: disks: disks passed
Aug 12 23:55:53.177041 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 12 23:55:53.175626 ignition[782]: Ignition finished successfully
Aug 12 23:55:53.179144 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 12 23:55:53.180156 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 12 23:55:53.181638 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:55:53.183118 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 12 23:55:53.184471 systemd[1]: Reached target basic.target - Basic System.
Aug 12 23:55:53.193260 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 12 23:55:53.203679 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 12 23:55:53.207872 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 12 23:55:53.214263 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 12 23:55:53.255146 kernel: EXT4-fs (vda9): mounted filesystem 128aec8b-f05d-48ed-8996-c9e8b21a7810 r/w with ordered data mode. Quota mode: none.
Aug 12 23:55:53.255619 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 12 23:55:53.256845 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:55:53.265170 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:55:53.267158 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 12 23:55:53.267979 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 12 23:55:53.268022 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 12 23:55:53.268057 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 12 23:55:53.274589 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 12 23:55:53.276713 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 12 23:55:53.280058 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (800)
Aug 12 23:55:53.281074 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 12 23:55:53.281090 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:55:53.282306 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:55:53.285062 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:55:53.286123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:55:53.324586 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Aug 12 23:55:53.328018 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Aug 12 23:55:53.332381 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Aug 12 23:55:53.336410 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 12 23:55:53.412867 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 12 23:55:53.421202 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 12 23:55:53.422734 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 12 23:55:53.428062 kernel: BTRFS info (device vda6): last unmount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 12 23:55:53.448352 ignition[914]: INFO : Ignition 2.19.0
Aug 12 23:55:53.448352 ignition[914]: INFO : Stage: mount
Aug 12 23:55:53.450556 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:55:53.450556 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:55:53.450556 ignition[914]: INFO : mount: mount passed
Aug 12 23:55:53.450556 ignition[914]: INFO : Ignition finished successfully
Aug 12 23:55:53.450384 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 12 23:55:53.454072 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 12 23:55:53.467175 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 12 23:55:53.872656 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 12 23:55:53.881277 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 12 23:55:53.887066 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (927)
Aug 12 23:55:53.889136 kernel: BTRFS info (device vda6): first mount of filesystem dbce4b09-c4b8-4cc9-bd11-416717f60c7d
Aug 12 23:55:53.889169 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Aug 12 23:55:53.889180 kernel: BTRFS info (device vda6): using free space tree
Aug 12 23:55:53.892068 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 12 23:55:53.893140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 12 23:55:53.911162 ignition[944]: INFO : Ignition 2.19.0
Aug 12 23:55:53.912568 ignition[944]: INFO : Stage: files
Aug 12 23:55:53.912568 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 12 23:55:53.912568 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Aug 12 23:55:53.914827 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Aug 12 23:55:53.915762 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 12 23:55:53.915762 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 12 23:55:53.919036 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 12 23:55:53.920073 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 12 23:55:53.920073 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 12 23:55:53.919651 unknown[944]: wrote ssh authorized keys file for user: core
Aug 12 23:55:53.923115 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 12 23:55:53.923115 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Aug 12 23:55:53.973946 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 12 23:55:54.341506 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Aug 12 23:55:54.343014 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 12 23:55:54.343014 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 12 23:55:54.343014 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:55:54.343014 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 12 23:55:54.343014 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:55:54.349528 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 12 23:55:54.349528 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:55:54.349528 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 12 23:55:54.349528 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:55:54.349528 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 12 23:55:54.349528 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 12 23:55:54.349528 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 12 23:55:54.349528 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 12 23:55:54.349528 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Aug 12 23:55:54.707216 systemd-networkd[762]: eth0: Gained IPv6LL
Aug 12 23:55:54.815521 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 12 23:55:55.214732 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Aug 12 23:55:55.214732 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 12 23:55:55.217912 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:55:55.217912 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 12 23:55:55.217912 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 12 23:55:55.217912 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 12 23:55:55.217912 ignition[944]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 12 23:55:55.217912 ignition[944]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Aug 12 23:55:55.217912 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 12 23:55:55.217912 ignition[944]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Aug 12 23:55:55.249891 ignition[944]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Aug 12 23:55:55.254333 ignition[944]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Aug 12 23:55:55.255902 ignition[944]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Aug 12 23:55:55.255902 ignition[944]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Aug 12 23:55:55.255902 ignition[944]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Aug 12 23:55:55.255902 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:55:55.255902 ignition[944]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 12 23:55:55.255902 ignition[944]: INFO : files: files passed
Aug 12 23:55:55.255902 ignition[944]: INFO : Ignition finished successfully
Aug 12 23:55:55.256586 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 12 23:55:55.271275 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 12 23:55:55.275849 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 12 23:55:55.277946 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 12 23:55:55.278040 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 12 23:55:55.285747 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Aug 12 23:55:55.289109 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:55:55.289109 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:55:55.292071 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 12 23:55:55.294109 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 12 23:55:55.296416 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 12 23:55:55.307230 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 12 23:55:55.331026 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 12 23:55:55.333099 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 12 23:55:55.334617 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 12 23:55:55.335915 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 12 23:55:55.337339 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 12 23:55:55.340843 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 12 23:55:55.356885 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 12 23:55:55.375295 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 12 23:55:55.384371 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:55:55.385709 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:55:55.387369 systemd[1]: Stopped target timers.target - Timer Units.
Aug 12 23:55:55.388907 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 12 23:55:55.390913 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 12 23:55:55.392980 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 12 23:55:55.394270 systemd[1]: Stopped target basic.target - Basic System.
Aug 12 23:55:55.395746 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 12 23:55:55.397150 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 12 23:55:55.398727 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 12 23:55:55.400376 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 12 23:55:55.401829 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 12 23:55:55.403411 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 12 23:55:55.404895 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 12 23:55:55.406380 systemd[1]: Stopped target swap.target - Swaps.
Aug 12 23:55:55.407709 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 12 23:55:55.407848 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 12 23:55:55.409737 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:55:55.411658 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:55:55.413124 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 12 23:55:55.414151 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:55:55.415606 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 12 23:55:55.415758 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 12 23:55:55.418017 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 12 23:55:55.418185 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 12 23:55:55.419900 systemd[1]: Stopped target paths.target - Path Units. Aug 12 23:55:55.421203 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 12 23:55:55.425107 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:55:55.426240 systemd[1]: Stopped target slices.target - Slice Units. Aug 12 23:55:55.428635 systemd[1]: Stopped target sockets.target - Socket Units. Aug 12 23:55:55.430147 systemd[1]: iscsid.socket: Deactivated successfully. Aug 12 23:55:55.430249 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 12 23:55:55.431651 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 12 23:55:55.431813 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 12 23:55:55.433271 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 12 23:55:55.433393 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:55:55.435040 systemd[1]: ignition-files.service: Deactivated successfully. Aug 12 23:55:55.435171 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 12 23:55:55.454339 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 12 23:55:55.455102 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 12 23:55:55.455307 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:55:55.458316 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 12 23:55:55.458997 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 12 23:55:55.459141 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:55:55.460767 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 12 23:55:55.460924 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 12 23:55:55.467232 ignition[998]: INFO : Ignition 2.19.0 Aug 12 23:55:55.467232 ignition[998]: INFO : Stage: umount Aug 12 23:55:55.468830 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:55:55.468830 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:55:55.468830 ignition[998]: INFO : umount: umount passed Aug 12 23:55:55.468830 ignition[998]: INFO : Ignition finished successfully Aug 12 23:55:55.467808 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 12 23:55:55.468526 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 12 23:55:55.470330 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 12 23:55:55.470422 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 12 23:55:55.472169 systemd[1]: Stopped target network.target - Network. 
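
Ignition runs as a sequence of initrd stages - fetch-offline, kargs, disks, files, and finally the umount stage shown above - and all of them log under the same syslog identifier, so the whole run can be replayed after boot:

  journalctl -b -t ignition          # every Ignition stage message from this boot
  cat /etc/.ignition-result.json     # the result file op(12) wrote above (the /sysroot prefix disappears after switch-root)
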
Aug 12 23:55:55.473299 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 12 23:55:55.473383 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 12 23:55:55.474767 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 12 23:55:55.474815 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 12 23:55:55.476787 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 12 23:55:55.476836 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 12 23:55:55.478282 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 12 23:55:55.478325 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 12 23:55:55.479973 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 12 23:55:55.481940 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 12 23:55:55.484122 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 12 23:55:55.489752 systemd-networkd[762]: eth0: DHCPv6 lease lost Aug 12 23:55:55.491502 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 12 23:55:55.491619 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 12 23:55:55.494365 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 12 23:55:55.494558 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 12 23:55:55.498421 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 12 23:55:55.498482 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:55:55.507233 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 12 23:55:55.507974 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 12 23:55:55.508041 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 12 23:55:55.509592 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 12 23:55:55.509634 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:55:55.510952 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 12 23:55:55.510993 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 12 23:55:55.512691 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 12 23:55:55.512739 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:55:55.514424 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 12 23:55:55.526724 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 12 23:55:55.526846 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 12 23:55:55.530772 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 12 23:55:55.530922 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 12 23:55:55.532901 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 12 23:55:55.533062 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 12 23:55:55.536861 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 12 23:55:55.536931 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 12 23:55:55.538495 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 12 23:55:55.538526 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Aug 12 23:55:55.539851 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 12 23:55:55.539898 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 12 23:55:55.542027 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 12 23:55:55.542098 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 12 23:55:55.544588 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 12 23:55:55.544647 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 12 23:55:55.547022 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 12 23:55:55.547085 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 12 23:55:55.559284 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 12 23:55:55.560165 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 12 23:55:55.560229 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 12 23:55:55.561999 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 12 23:55:55.562044 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:55:55.564968 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 12 23:55:55.565099 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 12 23:55:55.566450 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 12 23:55:55.568524 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 12 23:55:55.579163 systemd[1]: Switching root. Aug 12 23:55:55.609316 systemd-journald[236]: Journal stopped Aug 12 23:55:56.370315 systemd-journald[236]: Received SIGTERM from PID 1 (systemd). Aug 12 23:55:56.370380 kernel: SELinux: policy capability network_peer_controls=1 Aug 12 23:55:56.370394 kernel: SELinux: policy capability open_perms=1 Aug 12 23:55:56.370408 kernel: SELinux: policy capability extended_socket_class=1 Aug 12 23:55:56.370424 kernel: SELinux: policy capability always_check_network=0 Aug 12 23:55:56.370448 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 12 23:55:56.370459 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 12 23:55:56.370470 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 12 23:55:56.370480 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 12 23:55:56.370491 kernel: audit: type=1403 audit(1755042955.771:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 12 23:55:56.370503 systemd[1]: Successfully loaded SELinux policy in 32.889ms. Aug 12 23:55:56.370524 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.733ms. Aug 12 23:55:56.370538 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 12 23:55:56.370551 systemd[1]: Detected virtualization kvm. Aug 12 23:55:56.370562 systemd[1]: Detected architecture arm64. Aug 12 23:55:56.370574 systemd[1]: Detected first boot. Aug 12 23:55:56.370587 systemd[1]: Initializing machine ID from VM UUID. Aug 12 23:55:56.370598 zram_generator::config[1042]: No configuration found. Aug 12 23:55:56.370611 systemd[1]: Populated /etc with preset unit settings. 
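
At the switch-root above, PID 1 stops the initrd journal, pivots into the real root, and loads the SELinux policy before doing anything else; the audit record (type=1403) is the kernel acknowledging the policy load. Two quick checks once the system is up, assuming the SELinux userspace tools are present:

  getenforce               # Flatcar loads a policy but has historically kept it permissive
  journalctl --list-boots  # the journald stop/restart above stays within a single boot entry
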
Aug 12 23:55:56.370622 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 12 23:55:56.370635 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 12 23:55:56.370647 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 12 23:55:56.370659 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 12 23:55:56.370679 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 12 23:55:56.370692 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 12 23:55:56.370703 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 12 23:55:56.370715 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 12 23:55:56.370726 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 12 23:55:56.370738 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 12 23:55:56.370751 systemd[1]: Created slice user.slice - User and Session Slice. Aug 12 23:55:56.370763 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:55:56.370774 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:55:56.370786 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 12 23:55:56.370798 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 12 23:55:56.370810 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 12 23:55:56.370822 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 12 23:55:56.370834 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Aug 12 23:55:56.370845 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:55:56.370858 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 12 23:55:56.370869 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 12 23:55:56.370881 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 12 23:55:56.370892 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 12 23:55:56.370908 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:55:56.370920 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 12 23:55:56.370932 systemd[1]: Reached target slices.target - Slice Units. Aug 12 23:55:56.370943 systemd[1]: Reached target swap.target - Swaps. Aug 12 23:55:56.370957 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 12 23:55:56.370969 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 12 23:55:56.370983 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:55:56.370995 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 12 23:55:56.371006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 12 23:55:56.371018 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 12 23:55:56.371030 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
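
"Detected first boot" and "Initializing machine ID from VM UUID" above are systemd populating /etc (unit presets, machine ID) because the image has never booted before; on KVM the stable machine ID is derived from the hypervisor-provided UUID rather than generated randomly. To see the result:

  cat /etc/machine-id
  systemd-machine-id-setup --print   # the ID systemd would commit, without writing it
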
Aug 12 23:55:56.371041 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 12 23:55:56.371060 systemd[1]: Mounting media.mount - External Media Directory... Aug 12 23:55:56.371075 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 12 23:55:56.371087 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 12 23:55:56.371099 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 12 23:55:56.371110 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 12 23:55:56.371122 systemd[1]: Reached target machines.target - Containers. Aug 12 23:55:56.371133 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 12 23:55:56.371145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 12 23:55:56.371157 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 12 23:55:56.371170 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 12 23:55:56.371182 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 12 23:55:56.371193 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 12 23:55:56.371205 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 12 23:55:56.371216 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 12 23:55:56.371228 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 12 23:55:56.371241 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 12 23:55:56.371253 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 12 23:55:56.371265 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 12 23:55:56.371276 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 12 23:55:56.371288 systemd[1]: Stopped systemd-fsck-usr.service. Aug 12 23:55:56.371300 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 12 23:55:56.371312 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 12 23:55:56.371322 kernel: fuse: init (API version 7.39) Aug 12 23:55:56.371333 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 12 23:55:56.371344 kernel: ACPI: bus type drm_connector registered Aug 12 23:55:56.371354 kernel: loop: module loaded Aug 12 23:55:56.371366 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 12 23:55:56.371379 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 12 23:55:56.371390 systemd[1]: verity-setup.service: Deactivated successfully. Aug 12 23:55:56.371402 systemd[1]: Stopped verity-setup.service. Aug 12 23:55:56.371414 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 12 23:55:56.371425 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 12 23:55:56.371436 systemd[1]: Mounted media.mount - External Media Directory. Aug 12 23:55:56.371447 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
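
The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs above are all instances of a single template unit that just runs modprobe on its instance name; the kernel lines interleaved here (fuse: init, loop: module loaded, drm_connector registered) are those modules announcing themselves. For example:

  systemctl cat modprobe@.service         # the template behind every instance above
  systemctl start modprobe@fuse.service   # roughly equivalent to: modprobe fuse
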
Aug 12 23:55:56.371458 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 12 23:55:56.371471 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 12 23:55:56.371485 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:55:56.371496 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 12 23:55:56.371508 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 12 23:55:56.371542 systemd-journald[1106]: Collecting audit messages is disabled. Aug 12 23:55:56.371567 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:55:56.371579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 12 23:55:56.371591 systemd-journald[1106]: Journal started Aug 12 23:55:56.371614 systemd-journald[1106]: Runtime Journal (/run/log/journal/e2bbdb97e94f49af9e7d5be63dd8868f) is 5.9M, max 47.3M, 41.4M free. Aug 12 23:55:56.160302 systemd[1]: Queued start job for default target multi-user.target. Aug 12 23:55:56.176890 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 12 23:55:56.177280 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 12 23:55:56.374647 systemd[1]: Started systemd-journald.service - Journal Service. Aug 12 23:55:56.375560 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 12 23:55:56.375750 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 12 23:55:56.377174 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:55:56.377328 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 12 23:55:56.378523 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 12 23:55:56.378643 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 12 23:55:56.379860 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:55:56.380005 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 12 23:55:56.381306 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 12 23:55:56.382450 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 12 23:55:56.383814 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 12 23:55:56.396697 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 12 23:55:56.405192 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 12 23:55:56.407406 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 12 23:55:56.408394 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 12 23:55:56.408432 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 12 23:55:56.410466 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 12 23:55:56.412734 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 12 23:55:56.414947 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 12 23:55:56.416004 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 12 23:55:56.417660 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
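
The journald lines above show the volatile runtime journal in /run (5.9M used of a 47.3M cap) carrying early messages until the persistent journal under /var is writable; the flush service that runs shortly after migrates the accumulated entries. The same operations by hand:

  journalctl --disk-usage   # combined size of runtime and persistent journals
  journalctl --flush        # what systemd-journal-flush.service triggers
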
Aug 12 23:55:56.419663 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 12 23:55:56.420732 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 12 23:55:56.423274 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 12 23:55:56.424380 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 12 23:55:56.425530 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 12 23:55:56.430506 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 12 23:55:56.433202 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 12 23:55:56.433972 systemd-journald[1106]: Time spent on flushing to /var/log/journal/e2bbdb97e94f49af9e7d5be63dd8868f is 14.053ms for 854 entries. Aug 12 23:55:56.433972 systemd-journald[1106]: System Journal (/var/log/journal/e2bbdb97e94f49af9e7d5be63dd8868f) is 8.0M, max 195.6M, 187.6M free. Aug 12 23:55:56.542760 systemd-journald[1106]: Received client request to flush runtime journal. Aug 12 23:55:56.542836 kernel: loop0: detected capacity change from 0 to 114432 Aug 12 23:55:56.542864 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 12 23:55:56.542878 kernel: loop1: detected capacity change from 0 to 114328 Aug 12 23:55:56.542892 kernel: loop2: detected capacity change from 0 to 203944 Aug 12 23:55:56.436180 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 12 23:55:56.437946 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 12 23:55:56.439360 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 12 23:55:56.441355 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 12 23:55:56.453532 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 12 23:55:56.459381 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 12 23:55:56.463093 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 12 23:55:56.477162 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 12 23:55:56.494126 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 12 23:55:56.504301 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 12 23:55:56.519782 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Aug 12 23:55:56.519793 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Aug 12 23:55:56.523851 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 12 23:55:56.544097 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 12 23:55:56.545573 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 12 23:55:56.551727 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 12 23:55:56.562877 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
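
The loop0/loop1/loop2 "capacity change" lines interleaved above are the extension images being attached to loop devices (as squashfs, per the kernel line) in preparation for the sysext merge that follows. The attachments can be listed afterwards:

  losetup --list    # each /dev/loopN and the image file backing it
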
Aug 12 23:55:56.566142 kernel: loop3: detected capacity change from 0 to 114432 Aug 12 23:55:56.576082 kernel: loop4: detected capacity change from 0 to 114328 Aug 12 23:55:56.584090 kernel: loop5: detected capacity change from 0 to 203944 Aug 12 23:55:56.584123 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 12 23:55:56.587095 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 12 23:55:56.589415 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Aug 12 23:55:56.590236 (sd-merge)[1176]: Merged extensions into '/usr'. Aug 12 23:55:56.593919 systemd[1]: Reloading requested from client PID 1152 ('systemd-sysext') (unit systemd-sysext.service)... Aug 12 23:55:56.593939 systemd[1]: Reloading... Aug 12 23:55:56.665937 zram_generator::config[1203]: No configuration found. Aug 12 23:55:56.743088 ldconfig[1147]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 12 23:55:56.781875 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:55:56.820014 systemd[1]: Reloading finished in 225 ms. Aug 12 23:55:56.854743 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 12 23:55:56.857469 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 12 23:55:56.868302 systemd[1]: Starting ensure-sysext.service... Aug 12 23:55:56.870128 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 12 23:55:56.880600 systemd[1]: Reloading requested from client PID 1237 ('systemctl') (unit ensure-sysext.service)... Aug 12 23:55:56.880618 systemd[1]: Reloading... Aug 12 23:55:56.893578 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 12 23:55:56.894215 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 12 23:55:56.894975 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 12 23:55:56.895302 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Aug 12 23:55:56.895450 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Aug 12 23:55:56.898498 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Aug 12 23:55:56.898609 systemd-tmpfiles[1238]: Skipping /boot Aug 12 23:55:56.906613 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot. Aug 12 23:55:56.906768 systemd-tmpfiles[1238]: Skipping /boot Aug 12 23:55:56.930078 zram_generator::config[1268]: No configuration found. Aug 12 23:55:57.018728 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:55:57.055025 systemd[1]: Reloading finished in 174 ms. Aug 12 23:55:57.074899 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 12 23:55:57.083507 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:55:57.093253 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
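
The (sd-merge) lines above are systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes images (the last one fetched by Ignition earlier) onto /usr, which is why systemd immediately reloads its unit database. The merge can be inspected or redone at runtime:

  systemd-sysext list      # installed extension images
  systemd-sysext status    # which hierarchies currently carry overlays
  systemd-sysext refresh   # unmerge and remerge after adding or removing an image
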
Aug 12 23:55:57.096377 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 12 23:55:57.098741 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 12 23:55:57.103361 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 12 23:55:57.111437 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 12 23:55:57.119274 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 12 23:55:57.122484 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 12 23:55:57.126707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 12 23:55:57.138556 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 12 23:55:57.144339 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 12 23:55:57.145858 systemd-udevd[1307]: Using default interface naming scheme 'v255'. Aug 12 23:55:57.146561 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 12 23:55:57.147584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 12 23:55:57.150580 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 12 23:55:57.154559 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 12 23:55:57.158318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:55:57.158477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 12 23:55:57.162379 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:55:57.162513 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 12 23:55:57.174367 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:55:57.174598 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 12 23:55:57.176263 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 12 23:55:57.178127 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 12 23:55:57.183682 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 12 23:55:57.190019 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 12 23:55:57.202269 systemd[1]: Finished ensure-sysext.service. Aug 12 23:55:57.209528 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 12 23:55:57.211470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 12 23:55:57.220340 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 12 23:55:57.223953 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 12 23:55:57.228274 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 12 23:55:57.234116 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 12 23:55:57.235280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 12 23:55:57.244285 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Aug 12 23:55:57.249223 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 12 23:55:57.250120 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 12 23:55:57.264770 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Aug 12 23:55:57.269246 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 12 23:55:57.269414 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 12 23:55:57.270937 augenrules[1350]: No rules Aug 12 23:55:57.272588 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 12 23:55:57.274078 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1343) Aug 12 23:55:57.287002 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 12 23:55:57.289276 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 12 23:55:57.290923 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 12 23:55:57.291481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 12 23:55:57.306188 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 12 23:55:57.307988 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 12 23:55:57.309191 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 12 23:55:57.315293 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 12 23:55:57.316195 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 12 23:55:57.316271 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 12 23:55:57.338891 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 12 23:55:57.376483 systemd-resolved[1305]: Positive Trust Anchors: Aug 12 23:55:57.387193 systemd-resolved[1305]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 12 23:55:57.387239 systemd-resolved[1305]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 12 23:55:57.397373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:55:57.398415 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 12 23:55:57.398903 systemd-resolved[1305]: Defaulting to hostname 'linux'. Aug 12 23:55:57.401409 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 12 23:55:57.404431 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 12 23:55:57.405421 systemd[1]: Reached target time-set.target - System Time Set. 
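
Above, systemd-resolved seeds its DNSSEC trust anchor (the root-zone KSK DS record) plus negative anchors for private and special-use domains, and systemd-timesyncd starts alongside it. Both daemons are queryable once the box is up:

  resolvectl status             # per-link DNS servers (here learned via DHCP)
  timedatectl timesync-status   # NTP server, offset, poll interval
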
Aug 12 23:55:57.416023 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 12 23:55:57.432374 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 12 23:55:57.434138 systemd-networkd[1365]: lo: Link UP Aug 12 23:55:57.434145 systemd-networkd[1365]: lo: Gained carrier Aug 12 23:55:57.435124 systemd-networkd[1365]: Enumeration completed Aug 12 23:55:57.435196 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 12 23:55:57.436157 systemd[1]: Reached target network.target - Network. Aug 12 23:55:57.437925 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 12 23:55:57.437935 systemd-networkd[1365]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 12 23:55:57.438756 systemd-networkd[1365]: eth0: Link UP Aug 12 23:55:57.438764 systemd-networkd[1365]: eth0: Gained carrier Aug 12 23:55:57.438780 systemd-networkd[1365]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 12 23:55:57.443693 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 12 23:55:57.454213 lvm[1392]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 12 23:55:57.462147 systemd-networkd[1365]: eth0: DHCPv4 address 10.0.0.26/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 12 23:55:57.463303 systemd-timesyncd[1366]: Network configuration changed, trying to establish connection. Aug 12 23:55:57.894484 systemd-timesyncd[1366]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 12 23:55:57.894489 systemd-resolved[1305]: Clock change detected. Flushing caches. Aug 12 23:55:57.894536 systemd-timesyncd[1366]: Initial clock synchronization to Tue 2025-08-12 23:55:57.894376 UTC. Aug 12 23:55:57.897612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:55:57.913650 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 12 23:55:57.914941 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 12 23:55:57.916893 systemd[1]: Reached target sysinit.target - System Initialization. Aug 12 23:55:57.917883 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 12 23:55:57.918917 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 12 23:55:57.920074 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 12 23:55:57.921088 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 12 23:55:57.922066 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 12 23:55:57.922944 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 12 23:55:57.922990 systemd[1]: Reached target paths.target - Path Units. Aug 12 23:55:57.923714 systemd[1]: Reached target timers.target - Timer Units. Aug 12 23:55:57.925851 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 12 23:55:57.928180 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 12 23:55:57.946231 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
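
The networkd sequence above is Flatcar's catch-all network unit matching eth0 and acquiring 10.0.0.26/16 over DHCPv4; the "potentially unpredictable interface name" note only means the unit matches by name pattern rather than by stable hardware attributes, and the one-off clock step right after ("Clock change detected") is timesyncd applying the first NTP sample. A sketch of the essential stanzas of such a unit (the shipped zz-default.network differs in detail):

  networkctl status eth0    # address, gateway, lease state
  cat > /etc/systemd/network/50-dhcp.network <<'EOF'
  [Match]
  Name=*

  [Network]
  DHCP=yes
  EOF
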
Aug 12 23:55:57.948778 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 12 23:55:57.950431 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 12 23:55:57.951401 systemd[1]: Reached target sockets.target - Socket Units. Aug 12 23:55:57.952125 systemd[1]: Reached target basic.target - Basic System. Aug 12 23:55:57.952846 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 12 23:55:57.952877 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 12 23:55:57.953996 systemd[1]: Starting containerd.service - containerd container runtime... Aug 12 23:55:57.955845 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 12 23:55:57.958734 lvm[1401]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 12 23:55:57.959782 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 12 23:55:57.966400 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 12 23:55:57.967208 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 12 23:55:57.972282 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 12 23:55:57.973935 jq[1404]: false Aug 12 23:55:57.975352 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 12 23:55:57.979256 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 12 23:55:57.981497 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 12 23:55:57.987237 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 12 23:55:57.993302 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 12 23:55:57.993825 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 12 23:55:57.995703 systemd[1]: Starting update-engine.service - Update Engine... Aug 12 23:55:57.998113 extend-filesystems[1405]: Found loop3 Aug 12 23:55:58.002065 extend-filesystems[1405]: Found loop4 Aug 12 23:55:58.002065 extend-filesystems[1405]: Found loop5 Aug 12 23:55:58.002065 extend-filesystems[1405]: Found vda Aug 12 23:55:58.002065 extend-filesystems[1405]: Found vda1 Aug 12 23:55:58.002065 extend-filesystems[1405]: Found vda2 Aug 12 23:55:58.002065 extend-filesystems[1405]: Found vda3 Aug 12 23:55:58.002065 extend-filesystems[1405]: Found usr Aug 12 23:55:58.002065 extend-filesystems[1405]: Found vda4 Aug 12 23:55:58.002065 extend-filesystems[1405]: Found vda6 Aug 12 23:55:58.002065 extend-filesystems[1405]: Found vda7 Aug 12 23:55:58.002065 extend-filesystems[1405]: Found vda9 Aug 12 23:55:58.002065 extend-filesystems[1405]: Checking size of /dev/vda9 Aug 12 23:55:58.001038 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 12 23:55:58.007206 dbus-daemon[1403]: [system] SELinux support is enabled Aug 12 23:55:58.002752 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 12 23:55:58.022014 jq[1420]: true Aug 12 23:55:58.006575 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
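
extend-filesystems above first inventories every block device it might act on (the "Found vdaN" lines) before concluding, in the next step, that only vda9 - the root filesystem, labeled ROOT per the kernel command line - needs growing. The same inventory by hand:

  lsblk -f /dev/vda   # partitions with filesystem type and label
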
Aug 12 23:55:58.008041 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 12 23:55:58.008857 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 12 23:55:58.015543 systemd[1]: motdgen.service: Deactivated successfully. Aug 12 23:55:58.015745 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 12 23:55:58.023633 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 12 23:55:58.026040 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 12 23:55:58.042975 systemd-logind[1412]: Watching system buttons on /dev/input/event0 (Power Button) Aug 12 23:55:58.047277 extend-filesystems[1405]: Resized partition /dev/vda9 Aug 12 23:55:58.047197 systemd-logind[1412]: New seat seat0. Aug 12 23:55:58.050262 (ntainerd)[1427]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 12 23:55:58.051784 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 12 23:55:58.051817 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 12 23:55:58.053094 tar[1425]: linux-arm64/helm Aug 12 23:55:58.061981 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1330) Aug 12 23:55:58.063823 extend-filesystems[1439]: resize2fs 1.47.1 (20-May-2024) Aug 12 23:55:58.085500 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 12 23:55:58.080118 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 12 23:55:58.080141 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 12 23:55:58.081660 systemd[1]: Started systemd-logind.service - User Login Management. Aug 12 23:55:58.097056 jq[1426]: true Aug 12 23:55:58.112258 update_engine[1418]: I20250812 23:55:58.111804 1418 main.cc:92] Flatcar Update Engine starting Aug 12 23:55:58.119693 systemd[1]: Started update-engine.service - Update Engine. Aug 12 23:55:58.123283 update_engine[1418]: I20250812 23:55:58.123222 1418 update_check_scheduler.cc:74] Next update check in 6m19s Aug 12 23:55:58.133317 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 12 23:55:58.174772 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 12 23:55:58.189162 extend-filesystems[1439]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 12 23:55:58.189162 extend-filesystems[1439]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 12 23:55:58.189162 extend-filesystems[1439]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 12 23:55:58.197631 extend-filesystems[1405]: Resized filesystem in /dev/vda9 Aug 12 23:55:58.198305 bash[1462]: Updated "/home/core/.ssh/authorized_keys" Aug 12 23:55:58.191590 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 12 23:55:58.191814 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 12 23:55:58.198308 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 12 23:55:58.202066 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
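
The resize above grows the root ext4 online from 553472 to 1864699 4 KiB blocks, i.e. from about 2.1 GiB to about 7.1 GiB (1864699 x 4096 bytes), so the filesystem fills whatever disk the image landed on. The filesystem half of that, done manually:

  resize2fs /dev/vda9   # online grow; the partition must already have been enlarged
  df -h /               # confirm the new size
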
Aug 12 23:55:58.246387 locksmithd[1447]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 12 23:55:58.341539 containerd[1427]: time="2025-08-12T23:55:58.341426203Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 12 23:55:58.369556 containerd[1427]: time="2025-08-12T23:55:58.369501483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:55:58.371260 containerd[1427]: time="2025-08-12T23:55:58.370978683Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:55:58.371260 containerd[1427]: time="2025-08-12T23:55:58.371019003Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 12 23:55:58.371260 containerd[1427]: time="2025-08-12T23:55:58.371037243Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 12 23:55:58.371260 containerd[1427]: time="2025-08-12T23:55:58.371201443Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 12 23:55:58.371260 containerd[1427]: time="2025-08-12T23:55:58.371218123Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 12 23:55:58.371422 containerd[1427]: time="2025-08-12T23:55:58.371270403Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:55:58.371422 containerd[1427]: time="2025-08-12T23:55:58.371282923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:55:58.371459 containerd[1427]: time="2025-08-12T23:55:58.371440123Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:55:58.371459 containerd[1427]: time="2025-08-12T23:55:58.371454803Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 12 23:55:58.371499 containerd[1427]: time="2025-08-12T23:55:58.371469003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:55:58.371499 containerd[1427]: time="2025-08-12T23:55:58.371478763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 12 23:55:58.371580 containerd[1427]: time="2025-08-12T23:55:58.371548643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:55:58.371799 containerd[1427]: time="2025-08-12T23:55:58.371765003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 12 23:55:58.371885 containerd[1427]: time="2025-08-12T23:55:58.371868883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 12 23:55:58.371909 containerd[1427]: time="2025-08-12T23:55:58.371886483Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 12 23:55:58.371997 containerd[1427]: time="2025-08-12T23:55:58.371981563Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 12 23:55:58.372041 containerd[1427]: time="2025-08-12T23:55:58.372029323Z" level=info msg="metadata content store policy set" policy=shared Aug 12 23:55:58.375794 containerd[1427]: time="2025-08-12T23:55:58.375710483Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 12 23:55:58.375794 containerd[1427]: time="2025-08-12T23:55:58.375775443Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 12 23:55:58.375794 containerd[1427]: time="2025-08-12T23:55:58.375796163Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 12 23:55:58.375957 containerd[1427]: time="2025-08-12T23:55:58.375813963Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 12 23:55:58.375957 containerd[1427]: time="2025-08-12T23:55:58.375828523Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 12 23:55:58.376056 containerd[1427]: time="2025-08-12T23:55:58.375995763Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 12 23:55:58.377665 containerd[1427]: time="2025-08-12T23:55:58.377507443Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 12 23:55:58.378125 containerd[1427]: time="2025-08-12T23:55:58.378100923Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 12 23:55:58.378160 containerd[1427]: time="2025-08-12T23:55:58.378130723Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 12 23:55:58.378160 containerd[1427]: time="2025-08-12T23:55:58.378146723Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 12 23:55:58.378205 containerd[1427]: time="2025-08-12T23:55:58.378161163Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 12 23:55:58.378233 containerd[1427]: time="2025-08-12T23:55:58.378211523Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 12 23:55:58.378233 containerd[1427]: time="2025-08-12T23:55:58.378224683Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 12 23:55:58.378315 containerd[1427]: time="2025-08-12T23:55:58.378297923Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 12 23:55:58.378340 containerd[1427]: time="2025-08-12T23:55:58.378324523Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Aug 12 23:55:58.378359 containerd[1427]: time="2025-08-12T23:55:58.378340483Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 12 23:55:58.378359 containerd[1427]: time="2025-08-12T23:55:58.378353603Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 12 23:55:58.378403 containerd[1427]: time="2025-08-12T23:55:58.378366043Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 12 23:55:58.378425 containerd[1427]: time="2025-08-12T23:55:58.378402883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378425 containerd[1427]: time="2025-08-12T23:55:58.378417563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378466 containerd[1427]: time="2025-08-12T23:55:58.378430603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378466 containerd[1427]: time="2025-08-12T23:55:58.378447603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378466 containerd[1427]: time="2025-08-12T23:55:58.378460243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378524 containerd[1427]: time="2025-08-12T23:55:58.378472843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378524 containerd[1427]: time="2025-08-12T23:55:58.378484603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378524 containerd[1427]: time="2025-08-12T23:55:58.378498563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378524 containerd[1427]: time="2025-08-12T23:55:58.378511483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378600 containerd[1427]: time="2025-08-12T23:55:58.378526243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378620 containerd[1427]: time="2025-08-12T23:55:58.378538323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378639 containerd[1427]: time="2025-08-12T23:55:58.378622563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378661 containerd[1427]: time="2025-08-12T23:55:58.378637243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378661 containerd[1427]: time="2025-08-12T23:55:58.378654363Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 12 23:55:58.378697 containerd[1427]: time="2025-08-12T23:55:58.378676683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378697 containerd[1427]: time="2025-08-12T23:55:58.378693563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Aug 12 23:55:58.378732 containerd[1427]: time="2025-08-12T23:55:58.378705683Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 12 23:55:58.378886 containerd[1427]: time="2025-08-12T23:55:58.378820523Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 12 23:55:58.378909 containerd[1427]: time="2025-08-12T23:55:58.378897043Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 12 23:55:58.378931 containerd[1427]: time="2025-08-12T23:55:58.378911443Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 12 23:55:58.378931 containerd[1427]: time="2025-08-12T23:55:58.378923723Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 12 23:55:58.378982 containerd[1427]: time="2025-08-12T23:55:58.378934123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.378982 containerd[1427]: time="2025-08-12T23:55:58.378971163Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 12 23:55:58.379018 containerd[1427]: time="2025-08-12T23:55:58.378983323Z" level=info msg="NRI interface is disabled by configuration." Aug 12 23:55:58.379018 containerd[1427]: time="2025-08-12T23:55:58.378994123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 12 23:55:58.381974 containerd[1427]: time="2025-08-12T23:55:58.380143163Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 12 23:55:58.381974 containerd[1427]: time="2025-08-12T23:55:58.380228963Z" level=info msg="Connect containerd service" Aug 12 23:55:58.383115 containerd[1427]: time="2025-08-12T23:55:58.383031123Z" level=info msg="using legacy CRI server" Aug 12 23:55:58.383115 containerd[1427]: time="2025-08-12T23:55:58.383051163Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 12 23:55:58.383199 containerd[1427]: time="2025-08-12T23:55:58.383156003Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 12 23:55:58.383917 containerd[1427]: time="2025-08-12T23:55:58.383883483Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:55:58.384152 containerd[1427]: time="2025-08-12T23:55:58.384101043Z" level=info msg="Start subscribing containerd event" Aug 12 23:55:58.384266 containerd[1427]: time="2025-08-12T23:55:58.384224683Z" level=info msg="Start recovering state" Aug 12 23:55:58.384422 containerd[1427]: time="2025-08-12T23:55:58.384379083Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 12 23:55:58.384463 containerd[1427]: time="2025-08-12T23:55:58.384426523Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 12 23:55:58.385191 containerd[1427]: time="2025-08-12T23:55:58.384516683Z" level=info msg="Start event monitor" Aug 12 23:55:58.385191 containerd[1427]: time="2025-08-12T23:55:58.384540883Z" level=info msg="Start snapshots syncer" Aug 12 23:55:58.385191 containerd[1427]: time="2025-08-12T23:55:58.384550843Z" level=info msg="Start cni network conf syncer for default" Aug 12 23:55:58.385191 containerd[1427]: time="2025-08-12T23:55:58.384558283Z" level=info msg="Start streaming server" Aug 12 23:55:58.384806 systemd[1]: Started containerd.service - containerd container runtime. Aug 12 23:55:58.386679 containerd[1427]: time="2025-08-12T23:55:58.385848923Z" level=info msg="containerd successfully booted in 0.047020s" Aug 12 23:55:58.456215 tar[1425]: linux-arm64/LICENSE Aug 12 23:55:58.456215 tar[1425]: linux-arm64/README.md Aug 12 23:55:58.468521 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 12 23:55:58.976442 sshd_keygen[1423]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 12 23:55:59.001472 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 12 23:55:59.010234 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 12 23:55:59.015845 systemd[1]: issuegen.service: Deactivated successfully. 
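
The "failed to load cni during init" warning above is expected at this stage: the CRI plugin simply found no network config yet. A minimal Python sketch of the same lookup, using the NetworkPluginConfDir from the logged config (the paths are from the log; the script itself is illustrative):

import glob
import os

conf_dir = "/etc/cni/net.d"  # NetworkPluginConfDir from the CRI config dump above

# containerd's CRI plugin scans this directory for CNI config files
# (*.conf / *.conflist in the common cases); an empty directory yields
# the "no network config found" warning seen in the log.
configs = sorted(glob.glob(os.path.join(conf_dir, "*.conf*")))
if not configs:
    print(f"no network config found in {conf_dir}: cni plugin not initialized")
else:
    print("cni configs:", configs)
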
Aug 12 23:55:59.017019 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 12 23:55:59.019533 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 12 23:55:59.034007 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 12 23:55:59.042284 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 12 23:55:59.044579 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 12 23:55:59.045595 systemd[1]: Reached target getty.target - Login Prompts. Aug 12 23:55:59.617114 systemd-networkd[1365]: eth0: Gained IPv6LL Aug 12 23:55:59.620926 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 12 23:55:59.622596 systemd[1]: Reached target network-online.target - Network is Online. Aug 12 23:55:59.633231 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 12 23:55:59.635675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:55:59.637616 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 12 23:55:59.665603 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 12 23:55:59.665882 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 12 23:55:59.668691 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 12 23:55:59.672825 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 12 23:56:00.265324 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:00.266858 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 12 23:56:00.268177 systemd[1]: Startup finished in 619ms (kernel) + 5.037s (initrd) + 4.113s (userspace) = 9.770s. Aug 12 23:56:00.270082 (kubelet)[1516]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:56:00.746247 kubelet[1516]: E0812 23:56:00.746131 1516 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:56:00.748985 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:56:00.749132 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:56:04.187809 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 12 23:56:04.189654 systemd[1]: Started sshd@0-10.0.0.26:22-10.0.0.1:51744.service - OpenSSH per-connection server daemon (10.0.0.1:51744). Aug 12 23:56:04.327867 sshd[1530]: Accepted publickey for core from 10.0.0.1 port 51744 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:56:04.330590 sshd[1530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:04.342543 systemd-logind[1412]: New session 1 of user core. Aug 12 23:56:04.344125 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 12 23:56:04.367320 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 12 23:56:04.384911 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 12 23:56:04.388970 systemd[1]: Starting user@500.service - User Manager for UID 500... 
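
The kubelet exit above is a plain missing-file error: /var/lib/kubelet/config.yaml does not exist yet because no kubeadm init/join has run on this node. A sketch of the same precondition check (the path is copied from the error; the check itself is illustrative):

from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")  # path from the kubelet error above
if not cfg.is_file():
    raise SystemExit(f"failed to load kubelet config file, path: {cfg}: no such file or directory")
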
Aug 12 23:56:04.396457 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:56:04.490309 systemd[1534]: Queued start job for default target default.target. Aug 12 23:56:04.500024 systemd[1534]: Created slice app.slice - User Application Slice. Aug 12 23:56:04.500054 systemd[1534]: Reached target paths.target - Paths. Aug 12 23:56:04.500068 systemd[1534]: Reached target timers.target - Timers. Aug 12 23:56:04.501385 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 12 23:56:04.513750 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 12 23:56:04.514019 systemd[1534]: Reached target sockets.target - Sockets. Aug 12 23:56:04.514042 systemd[1534]: Reached target basic.target - Basic System. Aug 12 23:56:04.514091 systemd[1534]: Reached target default.target - Main User Target. Aug 12 23:56:04.514120 systemd[1534]: Startup finished in 111ms. Aug 12 23:56:04.514286 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 12 23:56:04.516238 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 12 23:56:04.587570 systemd[1]: Started sshd@1-10.0.0.26:22-10.0.0.1:51750.service - OpenSSH per-connection server daemon (10.0.0.1:51750). Aug 12 23:56:04.641246 sshd[1545]: Accepted publickey for core from 10.0.0.1 port 51750 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:56:04.642710 sshd[1545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:04.651482 systemd-logind[1412]: New session 2 of user core. Aug 12 23:56:04.661668 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 12 23:56:04.718068 sshd[1545]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:04.733497 systemd[1]: sshd@1-10.0.0.26:22-10.0.0.1:51750.service: Deactivated successfully. Aug 12 23:56:04.735785 systemd[1]: session-2.scope: Deactivated successfully. Aug 12 23:56:04.738674 systemd-logind[1412]: Session 2 logged out. Waiting for processes to exit. Aug 12 23:56:04.761391 systemd[1]: Started sshd@2-10.0.0.26:22-10.0.0.1:51756.service - OpenSSH per-connection server daemon (10.0.0.1:51756). Aug 12 23:56:04.762392 systemd-logind[1412]: Removed session 2. Aug 12 23:56:04.799915 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 51756 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:56:04.801445 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:04.807846 systemd-logind[1412]: New session 3 of user core. Aug 12 23:56:04.821221 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 12 23:56:04.870943 sshd[1552]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:04.889901 systemd[1]: sshd@2-10.0.0.26:22-10.0.0.1:51756.service: Deactivated successfully. Aug 12 23:56:04.892943 systemd[1]: session-3.scope: Deactivated successfully. Aug 12 23:56:04.894999 systemd-logind[1412]: Session 3 logged out. Waiting for processes to exit. Aug 12 23:56:04.897225 systemd[1]: Started sshd@3-10.0.0.26:22-10.0.0.1:51760.service - OpenSSH per-connection server daemon (10.0.0.1:51760). Aug 12 23:56:04.898142 systemd-logind[1412]: Removed session 3. 
Aug 12 23:56:04.940275 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 51760 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:56:04.942193 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:04.949333 systemd-logind[1412]: New session 4 of user core. Aug 12 23:56:04.963209 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 12 23:56:05.017555 sshd[1559]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:05.026909 systemd[1]: sshd@3-10.0.0.26:22-10.0.0.1:51760.service: Deactivated successfully. Aug 12 23:56:05.028816 systemd[1]: session-4.scope: Deactivated successfully. Aug 12 23:56:05.031337 systemd-logind[1412]: Session 4 logged out. Waiting for processes to exit. Aug 12 23:56:05.041603 systemd[1]: Started sshd@4-10.0.0.26:22-10.0.0.1:51770.service - OpenSSH per-connection server daemon (10.0.0.1:51770). Aug 12 23:56:05.042912 systemd-logind[1412]: Removed session 4. Aug 12 23:56:05.077961 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 51770 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:56:05.079354 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:05.083916 systemd-logind[1412]: New session 5 of user core. Aug 12 23:56:05.093207 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 12 23:56:05.159043 sudo[1569]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 12 23:56:05.159440 sudo[1569]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:56:05.179107 sudo[1569]: pam_unix(sudo:session): session closed for user root Aug 12 23:56:05.181207 sshd[1566]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:05.199914 systemd[1]: sshd@4-10.0.0.26:22-10.0.0.1:51770.service: Deactivated successfully. Aug 12 23:56:05.202702 systemd[1]: session-5.scope: Deactivated successfully. Aug 12 23:56:05.205137 systemd-logind[1412]: Session 5 logged out. Waiting for processes to exit. Aug 12 23:56:05.216313 systemd[1]: Started sshd@5-10.0.0.26:22-10.0.0.1:51774.service - OpenSSH per-connection server daemon (10.0.0.1:51774). Aug 12 23:56:05.217862 systemd-logind[1412]: Removed session 5. Aug 12 23:56:05.252590 sshd[1574]: Accepted publickey for core from 10.0.0.1 port 51774 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:56:05.254226 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:05.259905 systemd-logind[1412]: New session 6 of user core. Aug 12 23:56:05.273581 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 12 23:56:05.326792 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 12 23:56:05.327121 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:56:05.330617 sudo[1578]: pam_unix(sudo:session): session closed for user root Aug 12 23:56:05.339037 sudo[1577]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 12 23:56:05.339393 sudo[1577]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:56:05.355242 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 12 23:56:05.357271 auditctl[1581]: No rules Aug 12 23:56:05.358341 systemd[1]: audit-rules.service: Deactivated successfully. 
Aug 12 23:56:05.360007 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 12 23:56:05.364367 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 12 23:56:05.414094 augenrules[1599]: No rules Aug 12 23:56:05.415603 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 12 23:56:05.417170 sudo[1577]: pam_unix(sudo:session): session closed for user root Aug 12 23:56:05.419972 sshd[1574]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:05.428815 systemd[1]: sshd@5-10.0.0.26:22-10.0.0.1:51774.service: Deactivated successfully. Aug 12 23:56:05.431256 systemd[1]: session-6.scope: Deactivated successfully. Aug 12 23:56:05.434366 systemd-logind[1412]: Session 6 logged out. Waiting for processes to exit. Aug 12 23:56:05.447301 systemd[1]: Started sshd@6-10.0.0.26:22-10.0.0.1:51776.service - OpenSSH per-connection server daemon (10.0.0.1:51776). Aug 12 23:56:05.448682 systemd-logind[1412]: Removed session 6. Aug 12 23:56:05.489100 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 51776 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:56:05.490707 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:56:05.495815 systemd-logind[1412]: New session 7 of user core. Aug 12 23:56:05.505170 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 12 23:56:05.557972 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 12 23:56:05.558282 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:56:05.883310 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 12 23:56:05.883768 (dockerd)[1628]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 12 23:56:06.171845 dockerd[1628]: time="2025-08-12T23:56:06.171695763Z" level=info msg="Starting up" Aug 12 23:56:06.333201 dockerd[1628]: time="2025-08-12T23:56:06.333122003Z" level=info msg="Loading containers: start." Aug 12 23:56:06.440000 kernel: Initializing XFRM netlink socket Aug 12 23:56:06.507777 systemd-networkd[1365]: docker0: Link UP Aug 12 23:56:06.529564 dockerd[1628]: time="2025-08-12T23:56:06.529511283Z" level=info msg="Loading containers: done." Aug 12 23:56:06.543476 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2236151313-merged.mount: Deactivated successfully. Aug 12 23:56:06.546237 dockerd[1628]: time="2025-08-12T23:56:06.546159323Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 12 23:56:06.546343 dockerd[1628]: time="2025-08-12T23:56:06.546288923Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 12 23:56:06.546451 dockerd[1628]: time="2025-08-12T23:56:06.546433043Z" level=info msg="Daemon has completed initialization" Aug 12 23:56:06.578327 dockerd[1628]: time="2025-08-12T23:56:06.577897363Z" level=info msg="API listen on /run/docker.sock" Aug 12 23:56:06.578259 systemd[1]: Started docker.service - Docker Application Container Engine. 
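
Once dockerd logs "API listen on /run/docker.sock", the daemon can be queried over that Unix socket. A stdlib-only Python sketch (no Docker SDK assumed; the socket path is taken from the log):

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a Unix domain socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
print(json.loads(conn.getresponse().read())["Version"])  # the daemon above reports version=26.1.0
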
Aug 12 23:56:07.259922 containerd[1427]: time="2025-08-12T23:56:07.259863683Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 12 23:56:08.033385 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2695708814.mount: Deactivated successfully. Aug 12 23:56:09.004294 containerd[1427]: time="2025-08-12T23:56:09.004242243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:09.005686 containerd[1427]: time="2025-08-12T23:56:09.005572763Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=25651815" Aug 12 23:56:09.006838 containerd[1427]: time="2025-08-12T23:56:09.006775803Z" level=info msg="ImageCreate event name:\"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:09.010892 containerd[1427]: time="2025-08-12T23:56:09.010848123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:09.012305 containerd[1427]: time="2025-08-12T23:56:09.012263683Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"25648613\" in 1.75234716s" Aug 12 23:56:09.012397 containerd[1427]: time="2025-08-12T23:56:09.012311483Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:00a68b619a4bfa14c989a2181a7aa0726a5cb1272a7f65394e6a594ad6eade27\"" Aug 12 23:56:09.016754 containerd[1427]: time="2025-08-12T23:56:09.016683203Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 12 23:56:10.059998 containerd[1427]: time="2025-08-12T23:56:10.059693283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:10.061424 containerd[1427]: time="2025-08-12T23:56:10.061187483Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=22460285" Aug 12 23:56:10.064187 containerd[1427]: time="2025-08-12T23:56:10.064142843Z" level=info msg="ImageCreate event name:\"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:10.067659 containerd[1427]: time="2025-08-12T23:56:10.067586883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:10.068891 containerd[1427]: time="2025-08-12T23:56:10.068775003Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"23996073\" in 1.05204364s" 
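
The pull above reports both the byte count and the wall time, so the effective throughput is a one-liner (numbers copied from the log):

bytes_read = 25_651_815  # "bytes read=25651815" for kube-apiserver:v1.31.11
seconds = 1.75234716     # "... in 1.75234716s"
print(f"{bytes_read / seconds / 1e6:.1f} MB/s")  # ~14.6 MB/s
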
Aug 12 23:56:10.068891 containerd[1427]: time="2025-08-12T23:56:10.068817603Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:5c5dc52b837451e0fe6108fdfb9cfa431191ce227ce71d103dec8a8c655c4e71\"" Aug 12 23:56:10.069868 containerd[1427]: time="2025-08-12T23:56:10.069510843Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 12 23:56:10.893340 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 12 23:56:10.904387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:56:11.023451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:11.028313 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:56:11.071425 kubelet[1849]: E0812 23:56:11.071294 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:56:11.074323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:56:11.074466 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:56:11.211043 containerd[1427]: time="2025-08-12T23:56:11.210873883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:11.211787 containerd[1427]: time="2025-08-12T23:56:11.211748043Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=17125091" Aug 12 23:56:11.212840 containerd[1427]: time="2025-08-12T23:56:11.212767283Z" level=info msg="ImageCreate event name:\"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:11.216896 containerd[1427]: time="2025-08-12T23:56:11.216835403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:11.218228 containerd[1427]: time="2025-08-12T23:56:11.218184363Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"18660897\" in 1.14860164s" Aug 12 23:56:11.218267 containerd[1427]: time="2025-08-12T23:56:11.218228843Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:89be0efdc4ab1793b9b1b05e836e33dc50f5b2911b57609b315b58608b2d3746\"" Aug 12 23:56:11.218880 containerd[1427]: time="2025-08-12T23:56:11.218850163Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 12 23:56:12.193737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1416556719.mount: Deactivated successfully. 
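
The gap between the first kubelet failure (23:56:00.749132, earlier in the log) and the scheduled restart above is roughly ten seconds, consistent with a RestartSec= holdoff on the unit (the exact setting is an assumption; it is not shown in the log):

from datetime import datetime

failed = datetime.strptime("23:56:00.749132", "%H:%M:%S.%f")   # kubelet.service: Failed
restart = datetime.strptime("23:56:10.893340", "%H:%M:%S.%f")  # Scheduled restart job
print(f"{(restart - failed).total_seconds():.1f} s")  # 10.1 s
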
Aug 12 23:56:12.595355 containerd[1427]: time="2025-08-12T23:56:12.594936203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:12.596339 containerd[1427]: time="2025-08-12T23:56:12.595860403Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=26915995" Aug 12 23:56:12.598316 containerd[1427]: time="2025-08-12T23:56:12.598039083Z" level=info msg="ImageCreate event name:\"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:12.601203 containerd[1427]: time="2025-08-12T23:56:12.601157643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:12.602029 containerd[1427]: time="2025-08-12T23:56:12.601966563Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"26915012\" in 1.3829596s" Aug 12 23:56:12.602029 containerd[1427]: time="2025-08-12T23:56:12.602008683Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:7d1e7db6660181423f98acbe3a495b3fe5cec9b85cdef245540cc2cb3b180ab0\"" Aug 12 23:56:12.602744 containerd[1427]: time="2025-08-12T23:56:12.602465163Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 12 23:56:13.150301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2691370723.mount: Deactivated successfully. 
Aug 12 23:56:13.934364 containerd[1427]: time="2025-08-12T23:56:13.934285803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:13.934920 containerd[1427]: time="2025-08-12T23:56:13.934884443Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Aug 12 23:56:13.935821 containerd[1427]: time="2025-08-12T23:56:13.935761443Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:13.940349 containerd[1427]: time="2025-08-12T23:56:13.940294043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:13.943285 containerd[1427]: time="2025-08-12T23:56:13.943237803Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.3407392s" Aug 12 23:56:13.943285 containerd[1427]: time="2025-08-12T23:56:13.943285643Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 12 23:56:13.943896 containerd[1427]: time="2025-08-12T23:56:13.943869283Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 12 23:56:14.415189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4099040392.mount: Deactivated successfully. 
Aug 12 23:56:14.422750 containerd[1427]: time="2025-08-12T23:56:14.422686923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:14.424089 containerd[1427]: time="2025-08-12T23:56:14.423993003Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Aug 12 23:56:14.426281 containerd[1427]: time="2025-08-12T23:56:14.425158883Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:14.427594 containerd[1427]: time="2025-08-12T23:56:14.427561403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:14.428355 containerd[1427]: time="2025-08-12T23:56:14.428325443Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 484.41876ms" Aug 12 23:56:14.428427 containerd[1427]: time="2025-08-12T23:56:14.428362883Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 12 23:56:14.428881 containerd[1427]: time="2025-08-12T23:56:14.428793523Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 12 23:56:14.933051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2518108875.mount: Deactivated successfully. Aug 12 23:56:16.564166 containerd[1427]: time="2025-08-12T23:56:16.564104323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:16.564866 containerd[1427]: time="2025-08-12T23:56:16.564826523Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Aug 12 23:56:16.565811 containerd[1427]: time="2025-08-12T23:56:16.565760963Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:16.570001 containerd[1427]: time="2025-08-12T23:56:16.569780843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:16.570725 containerd[1427]: time="2025-08-12T23:56:16.570663323Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.14183836s" Aug 12 23:56:16.570725 containerd[1427]: time="2025-08-12T23:56:16.570707203Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Aug 12 23:56:20.717492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 12 23:56:20.727217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:56:20.755019 systemd[1]: Reloading requested from client PID 2006 ('systemctl') (unit session-7.scope)... Aug 12 23:56:20.755242 systemd[1]: Reloading... Aug 12 23:56:20.833981 zram_generator::config[2045]: No configuration found. Aug 12 23:56:20.930873 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:56:20.985012 systemd[1]: Reloading finished in 229 ms. Aug 12 23:56:21.030881 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:56:21.034937 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:56:21.035155 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:21.036853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:56:21.150093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:21.155392 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:56:21.195026 kubelet[2092]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:56:21.195440 kubelet[2092]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 12 23:56:21.195505 kubelet[2092]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
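
The deprecation warnings above all point at the same fix: move these flags into the file passed via --config. A hedged sketch of the equivalent KubeletConfiguration fields (YAML accepts JSON, so the output is a valid config file; the field names are the documented config-file equivalents of the two flags, and the values are taken from elsewhere in this log):

import json

kubelet_config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    # replaces --container-runtime-endpoint
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
    # replaces --volume-plugin-dir
    "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
}
print(json.dumps(kubelet_config, indent=2))
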
Aug 12 23:56:21.195749 kubelet[2092]: I0812 23:56:21.195677 2092 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:56:21.802682 kubelet[2092]: I0812 23:56:21.802538 2092 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 12 23:56:21.802682 kubelet[2092]: I0812 23:56:21.802575 2092 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:56:21.803293 kubelet[2092]: I0812 23:56:21.802864 2092 server.go:934] "Client rotation is on, will bootstrap in background" Aug 12 23:56:21.856265 kubelet[2092]: E0812 23:56:21.856200 2092 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:21.857583 kubelet[2092]: I0812 23:56:21.857438 2092 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:56:21.869113 kubelet[2092]: E0812 23:56:21.869053 2092 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:56:21.869113 kubelet[2092]: I0812 23:56:21.869108 2092 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 12 23:56:21.880131 kubelet[2092]: I0812 23:56:21.880095 2092 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 12 23:56:21.880440 kubelet[2092]: I0812 23:56:21.880413 2092 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 12 23:56:21.880592 kubelet[2092]: I0812 23:56:21.880552 2092 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:56:21.880766 kubelet[2092]: I0812 23:56:21.880586 2092 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 12 23:56:21.880994 kubelet[2092]: I0812 23:56:21.880983 2092 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:56:21.881029 kubelet[2092]: I0812 23:56:21.880996 2092 container_manager_linux.go:300] "Creating device plugin manager" Aug 12 23:56:21.881584 kubelet[2092]: I0812 23:56:21.881556 2092 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:56:21.886934 kubelet[2092]: I0812 23:56:21.886867 2092 kubelet.go:408] "Attempting to sync node with API server" Aug 12 23:56:21.886934 kubelet[2092]: I0812 23:56:21.886911 2092 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:56:21.886934 kubelet[2092]: I0812 23:56:21.886941 2092 kubelet.go:314] "Adding apiserver pod source" Aug 12 23:56:21.887105 kubelet[2092]: I0812 23:56:21.886973 2092 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:56:21.898939 kubelet[2092]: W0812 23:56:21.898795 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Aug 12 23:56:21.898939 kubelet[2092]: E0812 23:56:21.898893 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:21.900466 kubelet[2092]: I0812 23:56:21.900441 2092 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 12 23:56:21.901413 kubelet[2092]: W0812 23:56:21.901218 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Aug 12 23:56:21.901413 kubelet[2092]: E0812 23:56:21.901277 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:21.901658 kubelet[2092]: I0812 23:56:21.901636 2092 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:56:21.901705 kubelet[2092]: W0812 23:56:21.901696 2092 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 12 23:56:21.904052 kubelet[2092]: I0812 23:56:21.904031 2092 server.go:1274] "Started kubelet" Aug 12 23:56:21.905111 kubelet[2092]: I0812 23:56:21.905063 2092 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:56:21.906972 kubelet[2092]: I0812 23:56:21.905636 2092 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:56:21.906972 kubelet[2092]: I0812 23:56:21.905935 2092 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:56:21.906972 kubelet[2092]: I0812 23:56:21.906731 2092 server.go:449] "Adding debug handlers to kubelet server" Aug 12 23:56:21.907568 kubelet[2092]: I0812 23:56:21.907540 2092 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:56:21.907964 kubelet[2092]: I0812 23:56:21.907931 2092 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:56:21.909456 kubelet[2092]: I0812 23:56:21.909153 2092 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 12 23:56:21.909456 kubelet[2092]: I0812 23:56:21.909297 2092 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 12 23:56:21.909456 kubelet[2092]: I0812 23:56:21.909357 2092 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:56:21.909985 kubelet[2092]: W0812 23:56:21.909927 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Aug 12 23:56:21.910068 kubelet[2092]: E0812 23:56:21.909993 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Aug 12 
23:56:21.910477 kubelet[2092]: I0812 23:56:21.910451 2092 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:56:21.910590 kubelet[2092]: I0812 23:56:21.910565 2092 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:56:21.911366 kubelet[2092]: E0812 23:56:21.911338 2092 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:56:21.911868 kubelet[2092]: E0812 23:56:21.911840 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:56:21.911930 kubelet[2092]: E0812 23:56:21.911836 2092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="200ms" Aug 12 23:56:21.912449 kubelet[2092]: E0812 23:56:21.908570 2092 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2a4b1dee1e13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:56:21.903998483 +0000 UTC m=+0.745000441,LastTimestamp:2025-08-12 23:56:21.903998483 +0000 UTC m=+0.745000441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 12 23:56:21.912570 kubelet[2092]: I0812 23:56:21.912553 2092 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:56:21.928852 kubelet[2092]: I0812 23:56:21.928817 2092 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 12 23:56:21.928852 kubelet[2092]: I0812 23:56:21.928837 2092 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 12 23:56:21.928852 kubelet[2092]: I0812 23:56:21.928862 2092 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:56:21.934604 kubelet[2092]: I0812 23:56:21.934546 2092 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:56:21.935847 kubelet[2092]: I0812 23:56:21.935653 2092 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 12 23:56:21.935847 kubelet[2092]: I0812 23:56:21.935686 2092 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 12 23:56:21.935847 kubelet[2092]: I0812 23:56:21.935706 2092 kubelet.go:2321] "Starting kubelet main sync loop" Aug 12 23:56:21.935847 kubelet[2092]: E0812 23:56:21.935758 2092 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:56:21.936510 kubelet[2092]: W0812 23:56:21.936457 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Aug 12 23:56:21.936510 kubelet[2092]: E0812 23:56:21.936507 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:22.012250 kubelet[2092]: E0812 23:56:22.012180 2092 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:56:22.036480 kubelet[2092]: E0812 23:56:22.036363 2092 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 12 23:56:22.063021 kubelet[2092]: I0812 23:56:22.062848 2092 policy_none.go:49] "None policy: Start" Aug 12 23:56:22.063975 kubelet[2092]: I0812 23:56:22.063910 2092 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 12 23:56:22.063975 kubelet[2092]: I0812 23:56:22.063980 2092 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:56:22.072512 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 12 23:56:22.087788 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 12 23:56:22.091093 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
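
Every reflector error above is the same symptom: nothing is listening on 10.0.0.26:6443 yet because the kube-apiserver static pod has not started. A one-shot probe of that endpoint (address and port copied from the log):

import socket

try:
    socket.create_connection(("10.0.0.26", 6443), timeout=2).close()
    print("kube-apiserver port is open")
except OSError as exc:
    print(f"dial tcp 10.0.0.26:6443: {exc}")  # "connection refused" until the apiserver is up
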
Aug 12 23:56:22.106037 kubelet[2092]: I0812 23:56:22.105945 2092 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:56:22.106527 kubelet[2092]: I0812 23:56:22.106191 2092 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:56:22.106527 kubelet[2092]: I0812 23:56:22.106209 2092 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:56:22.106527 kubelet[2092]: I0812 23:56:22.106461 2092 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:56:22.108384 kubelet[2092]: E0812 23:56:22.108348 2092 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 12 23:56:22.112410 kubelet[2092]: E0812 23:56:22.112362 2092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="400ms" Aug 12 23:56:22.208380 kubelet[2092]: I0812 23:56:22.208341 2092 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:56:22.208882 kubelet[2092]: E0812 23:56:22.208840 2092 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Aug 12 23:56:22.245223 systemd[1]: Created slice kubepods-burstable-pod294790c90225a04c407a4c5c950efe66.slice - libcontainer container kubepods-burstable-pod294790c90225a04c407a4c5c950efe66.slice. Aug 12 23:56:22.265816 systemd[1]: Created slice kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice - libcontainer container kubepods-burstable-pod407c569889bb86d746b0274843003fd0.slice. Aug 12 23:56:22.288916 systemd[1]: Created slice kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice - libcontainer container kubepods-burstable-pod27e4a50e94f48ec00f6bd509cb48ed05.slice. 
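
The three kubepods-burstable-pod*.slice units above correspond to the control-plane static pods about to be created. Static pods read from /etc/kubernetes/manifests get the node name appended to the pod name, which is why the sandboxes below are named, for example, kube-apiserver-localhost (a sketch of that naming rule; nodeName "localhost" is from the log):

node_name = "localhost"
for base in ("kube-apiserver", "kube-controller-manager", "kube-scheduler"):
    print(f"{base}-{node_name}")
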
Aug 12 23:56:22.311833 kubelet[2092]: I0812 23:56:22.311780 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/294790c90225a04c407a4c5c950efe66-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"294790c90225a04c407a4c5c950efe66\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:56:22.311833 kubelet[2092]: I0812 23:56:22.311822 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:56:22.311833 kubelet[2092]: I0812 23:56:22.311842 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:56:22.312064 kubelet[2092]: I0812 23:56:22.311858 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/294790c90225a04c407a4c5c950efe66-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"294790c90225a04c407a4c5c950efe66\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:56:22.312064 kubelet[2092]: I0812 23:56:22.311876 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/294790c90225a04c407a4c5c950efe66-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"294790c90225a04c407a4c5c950efe66\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:56:22.312064 kubelet[2092]: I0812 23:56:22.311894 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:56:22.312064 kubelet[2092]: I0812 23:56:22.311909 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:56:22.312064 kubelet[2092]: I0812 23:56:22.311925 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:56:22.312193 kubelet[2092]: I0812 23:56:22.311970 2092 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " 
pod="kube-system/kube-scheduler-localhost" Aug 12 23:56:22.410474 kubelet[2092]: I0812 23:56:22.410431 2092 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:56:22.410803 kubelet[2092]: E0812 23:56:22.410772 2092 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Aug 12 23:56:22.513830 kubelet[2092]: E0812 23:56:22.513783 2092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="800ms" Aug 12 23:56:22.564068 kubelet[2092]: E0812 23:56:22.564026 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:22.564672 containerd[1427]: time="2025-08-12T23:56:22.564629003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:294790c90225a04c407a4c5c950efe66,Namespace:kube-system,Attempt:0,}" Aug 12 23:56:22.587058 kubelet[2092]: E0812 23:56:22.586979 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:22.587528 containerd[1427]: time="2025-08-12T23:56:22.587464763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 12 23:56:22.591844 kubelet[2092]: E0812 23:56:22.591808 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:22.592284 containerd[1427]: time="2025-08-12T23:56:22.592241163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 12 23:56:22.613301 kubelet[2092]: E0812 23:56:22.613167 2092 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.26:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.26:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2a4b1dee1e13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:56:21.903998483 +0000 UTC m=+0.745000441,LastTimestamp:2025-08-12 23:56:21.903998483 +0000 UTC m=+0.745000441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 12 23:56:22.803234 kubelet[2092]: W0812 23:56:22.802437 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Aug 12 23:56:22.803234 kubelet[2092]: E0812 23:56:22.802516 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to 
list *v1.CSIDriver: Get \"https://10.0.0.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:22.812169 kubelet[2092]: I0812 23:56:22.812133 2092 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:56:22.812488 kubelet[2092]: E0812 23:56:22.812468 2092 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Aug 12 23:56:23.042488 kubelet[2092]: W0812 23:56:23.042441 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Aug 12 23:56:23.042488 kubelet[2092]: E0812 23:56:23.042486 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:23.122593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1500261823.mount: Deactivated successfully. Aug 12 23:56:23.128508 containerd[1427]: time="2025-08-12T23:56:23.128454203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:56:23.131142 containerd[1427]: time="2025-08-12T23:56:23.131077683Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Aug 12 23:56:23.132416 containerd[1427]: time="2025-08-12T23:56:23.132372683Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:56:23.133326 containerd[1427]: time="2025-08-12T23:56:23.133297283Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:56:23.133802 containerd[1427]: time="2025-08-12T23:56:23.133760883Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 12 23:56:23.135017 containerd[1427]: time="2025-08-12T23:56:23.134982803Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:56:23.135624 containerd[1427]: time="2025-08-12T23:56:23.135582083Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 12 23:56:23.139573 containerd[1427]: time="2025-08-12T23:56:23.139524603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:56:23.140604 containerd[1427]: time="2025-08-12T23:56:23.140556843Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 575.82308ms" Aug 12 23:56:23.142606 containerd[1427]: time="2025-08-12T23:56:23.142564523Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 550.23972ms" Aug 12 23:56:23.146900 containerd[1427]: time="2025-08-12T23:56:23.146840883Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 559.24676ms" Aug 12 23:56:23.305324 containerd[1427]: time="2025-08-12T23:56:23.305164083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:56:23.305324 containerd[1427]: time="2025-08-12T23:56:23.305228003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:56:23.305324 containerd[1427]: time="2025-08-12T23:56:23.305244563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:23.305515 containerd[1427]: time="2025-08-12T23:56:23.305393403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:23.308363 containerd[1427]: time="2025-08-12T23:56:23.306418563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:56:23.308363 containerd[1427]: time="2025-08-12T23:56:23.308332523Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:56:23.308363 containerd[1427]: time="2025-08-12T23:56:23.308348243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:23.308636 containerd[1427]: time="2025-08-12T23:56:23.308445123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:23.312705 containerd[1427]: time="2025-08-12T23:56:23.312605963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:56:23.312941 containerd[1427]: time="2025-08-12T23:56:23.312725203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:56:23.312941 containerd[1427]: time="2025-08-12T23:56:23.312741043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:23.312941 containerd[1427]: time="2025-08-12T23:56:23.312834643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:23.314986 kubelet[2092]: E0812 23:56:23.314564 2092 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.26:6443: connect: connection refused" interval="1.6s" Aug 12 23:56:23.332213 systemd[1]: Started cri-containerd-8ad1036ffb9c5bdd8558dd7cca10c1f4563093ff606f93fbee2a882d2e79702b.scope - libcontainer container 8ad1036ffb9c5bdd8558dd7cca10c1f4563093ff606f93fbee2a882d2e79702b. Aug 12 23:56:23.337991 systemd[1]: Started cri-containerd-a876899d7b14bd5976cfbd2d2ab196365a31bf20e6fdb06720a1f80381e66f03.scope - libcontainer container a876899d7b14bd5976cfbd2d2ab196365a31bf20e6fdb06720a1f80381e66f03. Aug 12 23:56:23.339684 systemd[1]: Started cri-containerd-b4565ea191afad015ea3762d6c7b171a86d7b19b6eafd213761c330b1d2c260b.scope - libcontainer container b4565ea191afad015ea3762d6c7b171a86d7b19b6eafd213761c330b1d2c260b. Aug 12 23:56:23.369697 containerd[1427]: time="2025-08-12T23:56:23.369590523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ad1036ffb9c5bdd8558dd7cca10c1f4563093ff606f93fbee2a882d2e79702b\"" Aug 12 23:56:23.374041 kubelet[2092]: E0812 23:56:23.373835 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:23.379043 containerd[1427]: time="2025-08-12T23:56:23.378985843Z" level=info msg="CreateContainer within sandbox \"8ad1036ffb9c5bdd8558dd7cca10c1f4563093ff606f93fbee2a882d2e79702b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 12 23:56:23.383824 containerd[1427]: time="2025-08-12T23:56:23.383782283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:294790c90225a04c407a4c5c950efe66,Namespace:kube-system,Attempt:0,} returns sandbox id \"a876899d7b14bd5976cfbd2d2ab196365a31bf20e6fdb06720a1f80381e66f03\"" Aug 12 23:56:23.384811 kubelet[2092]: E0812 23:56:23.384682 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:23.386497 containerd[1427]: time="2025-08-12T23:56:23.386436443Z" level=info msg="CreateContainer within sandbox \"a876899d7b14bd5976cfbd2d2ab196365a31bf20e6fdb06720a1f80381e66f03\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 12 23:56:23.387440 containerd[1427]: time="2025-08-12T23:56:23.387392163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4565ea191afad015ea3762d6c7b171a86d7b19b6eafd213761c330b1d2c260b\"" Aug 12 23:56:23.388370 kubelet[2092]: E0812 23:56:23.388336 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:23.389993 containerd[1427]: time="2025-08-12T23:56:23.389934963Z" level=info msg="CreateContainer within sandbox \"b4565ea191afad015ea3762d6c7b171a86d7b19b6eafd213761c330b1d2c260b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 12 23:56:23.398300 
containerd[1427]: time="2025-08-12T23:56:23.398078723Z" level=info msg="CreateContainer within sandbox \"8ad1036ffb9c5bdd8558dd7cca10c1f4563093ff606f93fbee2a882d2e79702b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4d559dbeadfdea850420f29a28a3258ab5e0d04ae43d3275cc7e92b2b89cc06b\"" Aug 12 23:56:23.398771 containerd[1427]: time="2025-08-12T23:56:23.398734523Z" level=info msg="StartContainer for \"4d559dbeadfdea850420f29a28a3258ab5e0d04ae43d3275cc7e92b2b89cc06b\"" Aug 12 23:56:23.401880 kubelet[2092]: W0812 23:56:23.401817 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Aug 12 23:56:23.401998 kubelet[2092]: E0812 23:56:23.401884 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:23.404083 containerd[1427]: time="2025-08-12T23:56:23.404037763Z" level=info msg="CreateContainer within sandbox \"a876899d7b14bd5976cfbd2d2ab196365a31bf20e6fdb06720a1f80381e66f03\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3989dd6b21516e617146d8693da0b20e15593beb237edbb76276fd96b1777f89\"" Aug 12 23:56:23.404761 containerd[1427]: time="2025-08-12T23:56:23.404577723Z" level=info msg="StartContainer for \"3989dd6b21516e617146d8693da0b20e15593beb237edbb76276fd96b1777f89\"" Aug 12 23:56:23.406794 containerd[1427]: time="2025-08-12T23:56:23.406757683Z" level=info msg="CreateContainer within sandbox \"b4565ea191afad015ea3762d6c7b171a86d7b19b6eafd213761c330b1d2c260b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c7d6576a826d410321a7e5d9503ad6ff98830898739f5e3fad9028b749ca1b16\"" Aug 12 23:56:23.408061 containerd[1427]: time="2025-08-12T23:56:23.408036523Z" level=info msg="StartContainer for \"c7d6576a826d410321a7e5d9503ad6ff98830898739f5e3fad9028b749ca1b16\"" Aug 12 23:56:23.430155 systemd[1]: Started cri-containerd-4d559dbeadfdea850420f29a28a3258ab5e0d04ae43d3275cc7e92b2b89cc06b.scope - libcontainer container 4d559dbeadfdea850420f29a28a3258ab5e0d04ae43d3275cc7e92b2b89cc06b. Aug 12 23:56:23.434743 kubelet[2092]: W0812 23:56:23.434673 2092 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.26:6443: connect: connection refused Aug 12 23:56:23.434876 kubelet[2092]: E0812 23:56:23.434763 2092 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.26:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:56:23.441148 systemd[1]: Started cri-containerd-3989dd6b21516e617146d8693da0b20e15593beb237edbb76276fd96b1777f89.scope - libcontainer container 3989dd6b21516e617146d8693da0b20e15593beb237edbb76276fd96b1777f89. 
Aug 12 23:56:23.442669 systemd[1]: Started cri-containerd-c7d6576a826d410321a7e5d9503ad6ff98830898739f5e3fad9028b749ca1b16.scope - libcontainer container c7d6576a826d410321a7e5d9503ad6ff98830898739f5e3fad9028b749ca1b16. Aug 12 23:56:23.479299 containerd[1427]: time="2025-08-12T23:56:23.478230683Z" level=info msg="StartContainer for \"4d559dbeadfdea850420f29a28a3258ab5e0d04ae43d3275cc7e92b2b89cc06b\" returns successfully" Aug 12 23:56:23.479299 containerd[1427]: time="2025-08-12T23:56:23.478342443Z" level=info msg="StartContainer for \"3989dd6b21516e617146d8693da0b20e15593beb237edbb76276fd96b1777f89\" returns successfully" Aug 12 23:56:23.532190 containerd[1427]: time="2025-08-12T23:56:23.525074883Z" level=info msg="StartContainer for \"c7d6576a826d410321a7e5d9503ad6ff98830898739f5e3fad9028b749ca1b16\" returns successfully" Aug 12 23:56:23.614682 kubelet[2092]: I0812 23:56:23.614652 2092 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:56:23.615016 kubelet[2092]: E0812 23:56:23.614992 2092 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.26:6443/api/v1/nodes\": dial tcp 10.0.0.26:6443: connect: connection refused" node="localhost" Aug 12 23:56:23.942245 kubelet[2092]: E0812 23:56:23.942161 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:23.944769 kubelet[2092]: E0812 23:56:23.944556 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:23.951047 kubelet[2092]: E0812 23:56:23.950922 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:24.949152 kubelet[2092]: E0812 23:56:24.949081 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:25.217299 kubelet[2092]: I0812 23:56:25.217144 2092 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:56:25.512497 kubelet[2092]: E0812 23:56:25.512375 2092 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 12 23:56:25.585637 kubelet[2092]: I0812 23:56:25.585036 2092 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 12 23:56:25.900376 kubelet[2092]: I0812 23:56:25.900305 2092 apiserver.go:52] "Watching apiserver" Aug 12 23:56:25.909739 kubelet[2092]: I0812 23:56:25.909703 2092 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 12 23:56:27.495726 kubelet[2092]: E0812 23:56:27.495679 2092 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:27.627338 systemd[1]: Reloading requested from client PID 2372 ('systemctl') (unit session-7.scope)... Aug 12 23:56:27.627354 systemd[1]: Reloading... Aug 12 23:56:27.714004 zram_generator::config[2411]: No configuration found. 
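The kubelet entries throughout this log carry a klog text header before the message: a severity letter (I/W/E/F), an MMDD date, wall-clock time, the PID, and the source file:line. A small Go sketch that splits that header out of a line such as 'E0812 23:56:28.129657 2453 log.go:32] ...', assuming the stock klog text format:

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // severity, MMDD, time, PID, file:line, then the message body.
    var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([^\]]+)\] (.*)$`)

    func main() {
    	line := `E0812 23:56:28.129657 2453 log.go:32] "RuntimeConfig from runtime service failed"`
    	if m := klogHeader.FindStringSubmatch(line); m != nil {
    		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%s\n",
    			m[1], m[2], m[3], m[4], m[5], m[6])
    	}
    }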
Aug 12 23:56:27.811682 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:56:27.881610 systemd[1]: Reloading finished in 253 ms. Aug 12 23:56:27.916733 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:56:27.938512 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:56:27.938778 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:27.938856 systemd[1]: kubelet.service: Consumed 1.207s CPU time, 132.1M memory peak, 0B memory swap peak. Aug 12 23:56:27.949450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:56:28.062309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:56:28.067513 (kubelet)[2453]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:56:28.115656 kubelet[2453]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:56:28.115656 kubelet[2453]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 12 23:56:28.115656 kubelet[2453]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:56:28.116032 kubelet[2453]: I0812 23:56:28.115743 2453 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:56:28.122089 kubelet[2453]: I0812 23:56:28.122046 2453 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 12 23:56:28.122089 kubelet[2453]: I0812 23:56:28.122080 2453 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:56:28.122364 kubelet[2453]: I0812 23:56:28.122338 2453 server.go:934] "Client rotation is on, will bootstrap in background" Aug 12 23:56:28.123754 kubelet[2453]: I0812 23:56:28.123728 2453 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 12 23:56:28.126449 kubelet[2453]: I0812 23:56:28.126416 2453 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:56:28.129686 kubelet[2453]: E0812 23:56:28.129657 2453 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 12 23:56:28.129686 kubelet[2453]: I0812 23:56:28.129687 2453 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 12 23:56:28.132522 kubelet[2453]: I0812 23:56:28.132392 2453 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 12 23:56:28.132625 kubelet[2453]: I0812 23:56:28.132605 2453 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 12 23:56:28.132788 kubelet[2453]: I0812 23:56:28.132751 2453 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:56:28.133162 kubelet[2453]: I0812 23:56:28.132780 2453 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 12 23:56:28.133407 kubelet[2453]: I0812 23:56:28.133385 2453 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:56:28.133407 kubelet[2453]: I0812 23:56:28.133410 2453 container_manager_linux.go:300] "Creating device plugin manager" Aug 12 23:56:28.133483 kubelet[2453]: I0812 23:56:28.133462 2453 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:56:28.133593 kubelet[2453]: I0812 23:56:28.133578 2453 kubelet.go:408] "Attempting to sync node with API server" Aug 12 23:56:28.133626 kubelet[2453]: I0812 23:56:28.133596 2453 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:56:28.133626 kubelet[2453]: I0812 23:56:28.133615 2453 kubelet.go:314] "Adding apiserver pod source" Aug 12 23:56:28.133717 kubelet[2453]: I0812 23:56:28.133626 2453 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:56:28.134584 kubelet[2453]: I0812 23:56:28.134416 2453 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 12 23:56:28.134909 kubelet[2453]: I0812 23:56:28.134888 2453 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:56:28.135350 kubelet[2453]: I0812 23:56:28.135318 2453 server.go:1274] "Started kubelet" Aug 12 23:56:28.135818 kubelet[2453]: I0812 23:56:28.135653 2453 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:56:28.138417 kubelet[2453]: I0812 
23:56:28.136611 2453 server.go:449] "Adding debug handlers to kubelet server" Aug 12 23:56:28.138417 kubelet[2453]: I0812 23:56:28.137484 2453 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:56:28.139015 kubelet[2453]: I0812 23:56:28.138776 2453 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:56:28.139015 kubelet[2453]: I0812 23:56:28.138994 2453 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:56:28.139233 kubelet[2453]: I0812 23:56:28.139161 2453 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:56:28.139233 kubelet[2453]: I0812 23:56:28.139188 2453 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 12 23:56:28.140409 kubelet[2453]: I0812 23:56:28.139281 2453 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 12 23:56:28.140409 kubelet[2453]: I0812 23:56:28.139403 2453 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:56:28.140409 kubelet[2453]: E0812 23:56:28.139895 2453 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:56:28.140538 kubelet[2453]: I0812 23:56:28.140412 2453 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:56:28.141098 kubelet[2453]: I0812 23:56:28.140536 2453 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:56:28.141098 kubelet[2453]: E0812 23:56:28.140974 2453 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:56:28.149605 kubelet[2453]: I0812 23:56:28.149562 2453 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:56:28.172924 kubelet[2453]: I0812 23:56:28.172882 2453 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:56:28.175308 kubelet[2453]: I0812 23:56:28.175083 2453 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 12 23:56:28.175308 kubelet[2453]: I0812 23:56:28.175114 2453 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 12 23:56:28.175308 kubelet[2453]: I0812 23:56:28.175134 2453 kubelet.go:2321] "Starting kubelet main sync loop" Aug 12 23:56:28.175308 kubelet[2453]: E0812 23:56:28.175182 2453 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:56:28.196839 kubelet[2453]: I0812 23:56:28.196811 2453 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 12 23:56:28.196839 kubelet[2453]: I0812 23:56:28.196832 2453 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 12 23:56:28.196839 kubelet[2453]: I0812 23:56:28.196853 2453 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:56:28.197059 kubelet[2453]: I0812 23:56:28.197040 2453 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 12 23:56:28.197151 kubelet[2453]: I0812 23:56:28.197119 2453 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 12 23:56:28.197183 kubelet[2453]: I0812 23:56:28.197153 2453 policy_none.go:49] "None policy: Start" Aug 12 23:56:28.198035 kubelet[2453]: I0812 23:56:28.198001 2453 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 12 23:56:28.198035 kubelet[2453]: I0812 23:56:28.198027 2453 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:56:28.198196 kubelet[2453]: I0812 23:56:28.198181 2453 state_mem.go:75] "Updated machine memory state" Aug 12 23:56:28.202149 kubelet[2453]: I0812 23:56:28.202125 2453 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:56:28.202340 kubelet[2453]: I0812 23:56:28.202319 2453 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:56:28.202371 kubelet[2453]: I0812 23:56:28.202337 2453 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:56:28.202643 kubelet[2453]: I0812 23:56:28.202619 2453 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:56:28.286691 kubelet[2453]: E0812 23:56:28.286619 2453 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 12 23:56:28.307117 kubelet[2453]: I0812 23:56:28.307091 2453 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 12 23:56:28.314168 kubelet[2453]: I0812 23:56:28.314005 2453 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 12 23:56:28.314168 kubelet[2453]: I0812 23:56:28.314090 2453 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 12 23:56:28.440568 kubelet[2453]: I0812 23:56:28.440514 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:56:28.440568 kubelet[2453]: I0812 23:56:28.440562 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/294790c90225a04c407a4c5c950efe66-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"294790c90225a04c407a4c5c950efe66\") " 
pod="kube-system/kube-apiserver-localhost" Aug 12 23:56:28.440736 kubelet[2453]: I0812 23:56:28.440583 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:56:28.440736 kubelet[2453]: I0812 23:56:28.440600 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:56:28.440736 kubelet[2453]: I0812 23:56:28.440621 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:56:28.440736 kubelet[2453]: I0812 23:56:28.440640 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:56:28.440736 kubelet[2453]: I0812 23:56:28.440657 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:56:28.440856 kubelet[2453]: I0812 23:56:28.440672 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/294790c90225a04c407a4c5c950efe66-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"294790c90225a04c407a4c5c950efe66\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:56:28.440856 kubelet[2453]: I0812 23:56:28.440688 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/294790c90225a04c407a4c5c950efe66-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"294790c90225a04c407a4c5c950efe66\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:56:28.588006 kubelet[2453]: E0812 23:56:28.587757 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:28.588006 kubelet[2453]: E0812 23:56:28.587765 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:28.588006 kubelet[2453]: E0812 23:56:28.587982 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:29.135079 kubelet[2453]: I0812 23:56:29.135036 2453 apiserver.go:52] "Watching apiserver" Aug 12 23:56:29.140234 kubelet[2453]: I0812 23:56:29.140174 2453 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 12 23:56:29.188399 kubelet[2453]: E0812 23:56:29.188359 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:29.188399 kubelet[2453]: E0812 23:56:29.188401 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:29.193961 kubelet[2453]: E0812 23:56:29.193900 2453 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 12 23:56:29.194101 kubelet[2453]: E0812 23:56:29.194085 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:29.212686 kubelet[2453]: I0812 23:56:29.212534 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.212502043 podStartE2EDuration="2.212502043s" podCreationTimestamp="2025-08-12 23:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:56:29.212184563 +0000 UTC m=+1.141096801" watchObservedRunningTime="2025-08-12 23:56:29.212502043 +0000 UTC m=+1.141414281" Aug 12 23:56:29.230385 kubelet[2453]: I0812 23:56:29.229459 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.229442603 podStartE2EDuration="1.229442603s" podCreationTimestamp="2025-08-12 23:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:56:29.221865483 +0000 UTC m=+1.150777721" watchObservedRunningTime="2025-08-12 23:56:29.229442603 +0000 UTC m=+1.158354841" Aug 12 23:56:29.240185 kubelet[2453]: I0812 23:56:29.239972 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.239944643 podStartE2EDuration="1.239944643s" podCreationTimestamp="2025-08-12 23:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:56:29.229605883 +0000 UTC m=+1.158518121" watchObservedRunningTime="2025-08-12 23:56:29.239944643 +0000 UTC m=+1.168856841" Aug 12 23:56:30.189743 kubelet[2453]: E0812 23:56:30.189707 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:31.190943 kubelet[2453]: E0812 23:56:31.190912 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:33.026869 kubelet[2453]: I0812 23:56:33.026800 2453 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 
12 23:56:33.027274 containerd[1427]: time="2025-08-12T23:56:33.027176054Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 12 23:56:33.027454 kubelet[2453]: I0812 23:56:33.027364 2453 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 12 23:56:33.323516 systemd[1]: Created slice kubepods-besteffort-podb1e32d45_08b0_4ba9_b68c_20c85ba5e359.slice - libcontainer container kubepods-besteffort-podb1e32d45_08b0_4ba9_b68c_20c85ba5e359.slice. Aug 12 23:56:33.373255 kubelet[2453]: I0812 23:56:33.373211 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngcbv\" (UniqueName: \"kubernetes.io/projected/b1e32d45-08b0-4ba9-b68c-20c85ba5e359-kube-api-access-ngcbv\") pod \"kube-proxy-9498c\" (UID: \"b1e32d45-08b0-4ba9-b68c-20c85ba5e359\") " pod="kube-system/kube-proxy-9498c" Aug 12 23:56:33.373255 kubelet[2453]: I0812 23:56:33.373259 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b1e32d45-08b0-4ba9-b68c-20c85ba5e359-kube-proxy\") pod \"kube-proxy-9498c\" (UID: \"b1e32d45-08b0-4ba9-b68c-20c85ba5e359\") " pod="kube-system/kube-proxy-9498c" Aug 12 23:56:33.373425 kubelet[2453]: I0812 23:56:33.373276 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1e32d45-08b0-4ba9-b68c-20c85ba5e359-xtables-lock\") pod \"kube-proxy-9498c\" (UID: \"b1e32d45-08b0-4ba9-b68c-20c85ba5e359\") " pod="kube-system/kube-proxy-9498c" Aug 12 23:56:33.373425 kubelet[2453]: I0812 23:56:33.373292 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1e32d45-08b0-4ba9-b68c-20c85ba5e359-lib-modules\") pod \"kube-proxy-9498c\" (UID: \"b1e32d45-08b0-4ba9-b68c-20c85ba5e359\") " pod="kube-system/kube-proxy-9498c" Aug 12 23:56:33.482485 kubelet[2453]: E0812 23:56:33.482439 2453 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 12 23:56:33.482485 kubelet[2453]: E0812 23:56:33.482482 2453 projected.go:194] Error preparing data for projected volume kube-api-access-ngcbv for pod kube-system/kube-proxy-9498c: configmap "kube-root-ca.crt" not found Aug 12 23:56:33.482668 kubelet[2453]: E0812 23:56:33.482543 2453 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b1e32d45-08b0-4ba9-b68c-20c85ba5e359-kube-api-access-ngcbv podName:b1e32d45-08b0-4ba9-b68c-20c85ba5e359 nodeName:}" failed. No retries permitted until 2025-08-12 23:56:33.982522629 +0000 UTC m=+5.911434867 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ngcbv" (UniqueName: "kubernetes.io/projected/b1e32d45-08b0-4ba9-b68c-20c85ba5e359-kube-api-access-ngcbv") pod "kube-proxy-9498c" (UID: "b1e32d45-08b0-4ba9-b68c-20c85ba5e359") : configmap "kube-root-ca.crt" not found Aug 12 23:56:33.546011 kubelet[2453]: E0812 23:56:33.545029 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:33.726106 kubelet[2453]: E0812 23:56:33.725983 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:34.080039 kubelet[2453]: E0812 23:56:34.079677 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:34.141323 systemd[1]: Created slice kubepods-besteffort-pod3d8112b8_8f27_4154_b362_f8948715b9a6.slice - libcontainer container kubepods-besteffort-pod3d8112b8_8f27_4154_b362_f8948715b9a6.slice. Aug 12 23:56:34.179723 kubelet[2453]: I0812 23:56:34.179670 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3d8112b8-8f27-4154-b362-f8948715b9a6-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-knw4v\" (UID: \"3d8112b8-8f27-4154-b362-f8948715b9a6\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-knw4v" Aug 12 23:56:34.179723 kubelet[2453]: I0812 23:56:34.179732 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w8sc\" (UniqueName: \"kubernetes.io/projected/3d8112b8-8f27-4154-b362-f8948715b9a6-kube-api-access-4w8sc\") pod \"tigera-operator-5bf8dfcb4-knw4v\" (UID: \"3d8112b8-8f27-4154-b362-f8948715b9a6\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-knw4v" Aug 12 23:56:34.195405 kubelet[2453]: E0812 23:56:34.195137 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:34.195405 kubelet[2453]: E0812 23:56:34.195203 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:34.195764 kubelet[2453]: E0812 23:56:34.195711 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:34.237876 kubelet[2453]: E0812 23:56:34.237832 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:34.239072 containerd[1427]: time="2025-08-12T23:56:34.238983059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9498c,Uid:b1e32d45-08b0-4ba9-b68c-20c85ba5e359,Namespace:kube-system,Attempt:0,}" Aug 12 23:56:34.264300 containerd[1427]: time="2025-08-12T23:56:34.264215590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:56:34.264798 containerd[1427]: time="2025-08-12T23:56:34.264754830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:56:34.264845 containerd[1427]: time="2025-08-12T23:56:34.264821030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:34.265021 containerd[1427]: time="2025-08-12T23:56:34.264938070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:34.287208 systemd[1]: Started cri-containerd-8b1f1f09e71458bba946bf3075af75d5974394a41a215df725e3f45e23f8e6eb.scope - libcontainer container 8b1f1f09e71458bba946bf3075af75d5974394a41a215df725e3f45e23f8e6eb. Aug 12 23:56:34.315704 containerd[1427]: time="2025-08-12T23:56:34.315644892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9498c,Uid:b1e32d45-08b0-4ba9-b68c-20c85ba5e359,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b1f1f09e71458bba946bf3075af75d5974394a41a215df725e3f45e23f8e6eb\"" Aug 12 23:56:34.316518 kubelet[2453]: E0812 23:56:34.316491 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:34.320509 containerd[1427]: time="2025-08-12T23:56:34.320359254Z" level=info msg="CreateContainer within sandbox \"8b1f1f09e71458bba946bf3075af75d5974394a41a215df725e3f45e23f8e6eb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 12 23:56:34.336418 containerd[1427]: time="2025-08-12T23:56:34.336300502Z" level=info msg="CreateContainer within sandbox \"8b1f1f09e71458bba946bf3075af75d5974394a41a215df725e3f45e23f8e6eb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d5e4591739746d46ef1b8931a0542836d24e9acfa42c94df4efd71374b1e2a51\"" Aug 12 23:56:34.337088 containerd[1427]: time="2025-08-12T23:56:34.337052302Z" level=info msg="StartContainer for \"d5e4591739746d46ef1b8931a0542836d24e9acfa42c94df4efd71374b1e2a51\"" Aug 12 23:56:34.363159 systemd[1]: Started cri-containerd-d5e4591739746d46ef1b8931a0542836d24e9acfa42c94df4efd71374b1e2a51.scope - libcontainer container d5e4591739746d46ef1b8931a0542836d24e9acfa42c94df4efd71374b1e2a51. Aug 12 23:56:34.388005 containerd[1427]: time="2025-08-12T23:56:34.387804324Z" level=info msg="StartContainer for \"d5e4591739746d46ef1b8931a0542836d24e9acfa42c94df4efd71374b1e2a51\" returns successfully" Aug 12 23:56:34.444509 containerd[1427]: time="2025-08-12T23:56:34.444380629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-knw4v,Uid:3d8112b8-8f27-4154-b362-f8948715b9a6,Namespace:tigera-operator,Attempt:0,}" Aug 12 23:56:34.471784 containerd[1427]: time="2025-08-12T23:56:34.471671761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:56:34.471977 containerd[1427]: time="2025-08-12T23:56:34.471805001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:56:34.471977 containerd[1427]: time="2025-08-12T23:56:34.471819561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:34.472109 containerd[1427]: time="2025-08-12T23:56:34.471937801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:56:34.489152 systemd[1]: Started cri-containerd-9dbc832b797dfeee096715332c42de88296e42411a6d2c9422c97547163e9e5d.scope - libcontainer container 9dbc832b797dfeee096715332c42de88296e42411a6d2c9422c97547163e9e5d. Aug 12 23:56:34.548907 containerd[1427]: time="2025-08-12T23:56:34.548792635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-knw4v,Uid:3d8112b8-8f27-4154-b362-f8948715b9a6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9dbc832b797dfeee096715332c42de88296e42411a6d2c9422c97547163e9e5d\"" Aug 12 23:56:34.550687 containerd[1427]: time="2025-08-12T23:56:34.550646196Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 12 23:56:35.198507 kubelet[2453]: E0812 23:56:35.198454 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:35.198893 kubelet[2453]: E0812 23:56:35.198814 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:35.211894 kubelet[2453]: I0812 23:56:35.211322 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9498c" podStartSLOduration=2.211302642 podStartE2EDuration="2.211302642s" podCreationTimestamp="2025-08-12 23:56:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:56:35.211298562 +0000 UTC m=+7.140210800" watchObservedRunningTime="2025-08-12 23:56:35.211302642 +0000 UTC m=+7.140214880" Aug 12 23:56:35.681623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3478646342.mount: Deactivated successfully. 
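The pod_startup_latency_tracker entries report two figures: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it, which is why the two are equal for kube-proxy-9498c above, where no pull was recorded. A short Go check against the tigera-operator figures that appear further down in this log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	parse := func(s string) time.Time {
    		t, err := time.Parse(layout, s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	// Timestamps copied from the tigera-operator-5bf8dfcb4-knw4v entry.
    	created := parse("2025-08-12 23:56:34 +0000 UTC")
    	running := parse("2025-08-12 23:56:46.964109483 +0000 UTC")
    	pullStart := parse("2025-08-12 23:56:34.550011316 +0000 UTC")
    	pullEnd := parse("2025-08-12 23:56:35.999477929 +0000 UTC")

    	e2e := running.Sub(created)            // end-to-end startup time
    	slo := e2e - pullEnd.Sub(pullStart)    // minus the image-pull window
    	fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
    }

This reproduces podStartSLOduration=11.51464287s from podStartE2EDuration=12.964109483s, matching the logged values.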
Aug 12 23:56:35.992665 containerd[1427]: time="2025-08-12T23:56:35.992538486Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:35.993645 containerd[1427]: time="2025-08-12T23:56:35.993615087Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 12 23:56:35.994778 containerd[1427]: time="2025-08-12T23:56:35.994731007Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:35.997284 containerd[1427]: time="2025-08-12T23:56:35.997245488Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:35.998604 containerd[1427]: time="2025-08-12T23:56:35.998569529Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.447885973s" Aug 12 23:56:35.998637 containerd[1427]: time="2025-08-12T23:56:35.998604249Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 12 23:56:36.001633 containerd[1427]: time="2025-08-12T23:56:36.001594450Z" level=info msg="CreateContainer within sandbox \"9dbc832b797dfeee096715332c42de88296e42411a6d2c9422c97547163e9e5d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 12 23:56:36.012270 containerd[1427]: time="2025-08-12T23:56:36.012218254Z" level=info msg="CreateContainer within sandbox \"9dbc832b797dfeee096715332c42de88296e42411a6d2c9422c97547163e9e5d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"54a20131fb5e7081bef23bb24fbdeb25e3844308df9414917dcf78aebe9d8c10\"" Aug 12 23:56:36.014342 containerd[1427]: time="2025-08-12T23:56:36.013495935Z" level=info msg="StartContainer for \"54a20131fb5e7081bef23bb24fbdeb25e3844308df9414917dcf78aebe9d8c10\"" Aug 12 23:56:36.041173 systemd[1]: Started cri-containerd-54a20131fb5e7081bef23bb24fbdeb25e3844308df9414917dcf78aebe9d8c10.scope - libcontainer container 54a20131fb5e7081bef23bb24fbdeb25e3844308df9414917dcf78aebe9d8c10. Aug 12 23:56:36.064763 containerd[1427]: time="2025-08-12T23:56:36.064717874Z" level=info msg="StartContainer for \"54a20131fb5e7081bef23bb24fbdeb25e3844308df9414917dcf78aebe9d8c10\" returns successfully" Aug 12 23:56:41.566776 sudo[1610]: pam_unix(sudo:session): session closed for user root Aug 12 23:56:41.575760 sshd[1607]: pam_unix(sshd:session): session closed for user core Aug 12 23:56:41.584904 systemd[1]: sshd@6-10.0.0.26:22-10.0.0.1:51776.service: Deactivated successfully. Aug 12 23:56:41.586660 systemd[1]: session-7.scope: Deactivated successfully. Aug 12 23:56:41.586897 systemd[1]: session-7.scope: Consumed 6.300s CPU time, 147.9M memory peak, 0B memory swap peak. Aug 12 23:56:41.587492 systemd-logind[1412]: Session 7 logged out. Waiting for processes to exit. Aug 12 23:56:41.588524 systemd-logind[1412]: Removed session 7. 
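Each "Pulled image" entry above pairs a size with a wall-clock duration, so an effective transfer rate can be read off: for the operator image, 22,146,605 bytes in 1.447885973s is roughly 15.3 MB/s. A trivial Go check, assuming the logged size approximates the bytes actually fetched (containerd reports the image content size, so this is only an estimate):

    package main

    import "fmt"

    func main() {
    	const sizeBytes = 22146605.0 // "size" reported for quay.io/tigera/operator:v1.38.3
    	const seconds = 1.447885973  // pull duration from the log
    	fmt.Printf("approx. %.1f MB/s\n", sizeBytes/seconds/1e6)
    }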
Aug 12 23:56:43.500054 update_engine[1418]: I20250812 23:56:43.499977 1418 update_attempter.cc:509] Updating boot flags... Aug 12 23:56:43.613033 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2867) Aug 12 23:56:46.965074 kubelet[2453]: I0812 23:56:46.964173 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-knw4v" podStartSLOduration=11.51464287 podStartE2EDuration="12.964109483s" podCreationTimestamp="2025-08-12 23:56:34 +0000 UTC" firstStartedPulling="2025-08-12 23:56:34.550011316 +0000 UTC m=+6.478923554" lastFinishedPulling="2025-08-12 23:56:35.999477929 +0000 UTC m=+7.928390167" observedRunningTime="2025-08-12 23:56:36.210626051 +0000 UTC m=+8.139538289" watchObservedRunningTime="2025-08-12 23:56:46.964109483 +0000 UTC m=+18.893021721" Aug 12 23:56:46.984907 systemd[1]: Created slice kubepods-besteffort-pode7b26aa3_ddf4_459a_b394_d40748972487.slice - libcontainer container kubepods-besteffort-pode7b26aa3_ddf4_459a_b394_d40748972487.slice. Aug 12 23:56:47.074571 kubelet[2453]: I0812 23:56:47.074468 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7b26aa3-ddf4-459a-b394-d40748972487-tigera-ca-bundle\") pod \"calico-typha-6d6884fc69-pls5t\" (UID: \"e7b26aa3-ddf4-459a-b394-d40748972487\") " pod="calico-system/calico-typha-6d6884fc69-pls5t" Aug 12 23:56:47.074571 kubelet[2453]: I0812 23:56:47.074520 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e7b26aa3-ddf4-459a-b394-d40748972487-typha-certs\") pod \"calico-typha-6d6884fc69-pls5t\" (UID: \"e7b26aa3-ddf4-459a-b394-d40748972487\") " pod="calico-system/calico-typha-6d6884fc69-pls5t" Aug 12 23:56:47.074571 kubelet[2453]: I0812 23:56:47.074542 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9wnf\" (UniqueName: \"kubernetes.io/projected/e7b26aa3-ddf4-459a-b394-d40748972487-kube-api-access-v9wnf\") pod \"calico-typha-6d6884fc69-pls5t\" (UID: \"e7b26aa3-ddf4-459a-b394-d40748972487\") " pod="calico-system/calico-typha-6d6884fc69-pls5t" Aug 12 23:56:47.218105 systemd[1]: Created slice kubepods-besteffort-pod14d49ad2_2259_40de_b0df_51e5874607d1.slice - libcontainer container kubepods-besteffort-pod14d49ad2_2259_40de_b0df_51e5874607d1.slice. Aug 12 23:56:47.287942 kubelet[2453]: E0812 23:56:47.287871 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:47.288747 containerd[1427]: time="2025-08-12T23:56:47.288446465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d6884fc69-pls5t,Uid:e7b26aa3-ddf4-459a-b394-d40748972487,Namespace:calico-system,Attempt:0,}" Aug 12 23:56:47.313143 containerd[1427]: time="2025-08-12T23:56:47.312854510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:56:47.313143 containerd[1427]: time="2025-08-12T23:56:47.313031190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:56:47.314755 containerd[1427]: time="2025-08-12T23:56:47.314681070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:56:47.314856 containerd[1427]: time="2025-08-12T23:56:47.314829070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:56:47.345188 systemd[1]: Started cri-containerd-994e7ebfe5b517d6b8029ef879e5b5bb10c49488abe21d952ad8a117904bf375.scope - libcontainer container 994e7ebfe5b517d6b8029ef879e5b5bb10c49488abe21d952ad8a117904bf375.
Aug 12 23:56:47.377680 kubelet[2453]: I0812 23:56:47.377509 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/14d49ad2-2259-40de-b0df-51e5874607d1-node-certs\") pod \"calico-node-p4ws9\" (UID: \"14d49ad2-2259-40de-b0df-51e5874607d1\") " pod="calico-system/calico-node-p4ws9"
Aug 12 23:56:47.377680 kubelet[2453]: I0812 23:56:47.377581 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/14d49ad2-2259-40de-b0df-51e5874607d1-cni-bin-dir\") pod \"calico-node-p4ws9\" (UID: \"14d49ad2-2259-40de-b0df-51e5874607d1\") " pod="calico-system/calico-node-p4ws9"
Aug 12 23:56:47.377680 kubelet[2453]: I0812 23:56:47.377639 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/14d49ad2-2259-40de-b0df-51e5874607d1-cni-log-dir\") pod \"calico-node-p4ws9\" (UID: \"14d49ad2-2259-40de-b0df-51e5874607d1\") " pod="calico-system/calico-node-p4ws9"
Aug 12 23:56:47.381453 kubelet[2453]: I0812 23:56:47.378325 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/14d49ad2-2259-40de-b0df-51e5874607d1-lib-modules\") pod \"calico-node-p4ws9\" (UID: \"14d49ad2-2259-40de-b0df-51e5874607d1\") " pod="calico-system/calico-node-p4ws9"
Aug 12 23:56:47.381453 kubelet[2453]: I0812 23:56:47.380561 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9ldz\" (UniqueName: \"kubernetes.io/projected/14d49ad2-2259-40de-b0df-51e5874607d1-kube-api-access-h9ldz\") pod \"calico-node-p4ws9\" (UID: \"14d49ad2-2259-40de-b0df-51e5874607d1\") " pod="calico-system/calico-node-p4ws9"
Aug 12 23:56:47.381453 kubelet[2453]: I0812 23:56:47.380599 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/14d49ad2-2259-40de-b0df-51e5874607d1-cni-net-dir\") pod \"calico-node-p4ws9\" (UID: \"14d49ad2-2259-40de-b0df-51e5874607d1\") " pod="calico-system/calico-node-p4ws9"
Aug 12 23:56:47.381453 kubelet[2453]: I0812 23:56:47.380666 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/14d49ad2-2259-40de-b0df-51e5874607d1-var-lib-calico\") pod \"calico-node-p4ws9\" (UID: \"14d49ad2-2259-40de-b0df-51e5874607d1\") " pod="calico-system/calico-node-p4ws9"
Aug 12 23:56:47.381453 kubelet[2453]: I0812 23:56:47.380737 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/14d49ad2-2259-40de-b0df-51e5874607d1-var-run-calico\") pod \"calico-node-p4ws9\" (UID: \"14d49ad2-2259-40de-b0df-51e5874607d1\") " pod="calico-system/calico-node-p4ws9"
Aug 12 23:56:47.381843 kubelet[2453]: I0812 23:56:47.380865 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/14d49ad2-2259-40de-b0df-51e5874607d1-policysync\") pod \"calico-node-p4ws9\" (UID: \"14d49ad2-2259-40de-b0df-51e5874607d1\") " pod="calico-system/calico-node-p4ws9"
Aug 12 23:56:47.381843 kubelet[2453]: I0812 23:56:47.380906 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14d49ad2-2259-40de-b0df-51e5874607d1-tigera-ca-bundle\") pod \"calico-node-p4ws9\" (UID: \"14d49ad2-2259-40de-b0df-51e5874607d1\") " pod="calico-system/calico-node-p4ws9"
Aug 12 23:56:47.381843 kubelet[2453]: I0812 23:56:47.381073 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/14d49ad2-2259-40de-b0df-51e5874607d1-xtables-lock\") pod \"calico-node-p4ws9\" (UID: \"14d49ad2-2259-40de-b0df-51e5874607d1\") " pod="calico-system/calico-node-p4ws9"
Aug 12 23:56:47.381843 kubelet[2453]: I0812 23:56:47.381293 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/14d49ad2-2259-40de-b0df-51e5874607d1-flexvol-driver-host\") pod \"calico-node-p4ws9\" (UID: \"14d49ad2-2259-40de-b0df-51e5874607d1\") " pod="calico-system/calico-node-p4ws9"
Aug 12 23:56:47.405787 containerd[1427]: time="2025-08-12T23:56:47.405735807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6d6884fc69-pls5t,Uid:e7b26aa3-ddf4-459a-b394-d40748972487,Namespace:calico-system,Attempt:0,} returns sandbox id \"994e7ebfe5b517d6b8029ef879e5b5bb10c49488abe21d952ad8a117904bf375\""
Aug 12 23:56:47.407362 kubelet[2453]: E0812 23:56:47.407320 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:56:47.411814 containerd[1427]: time="2025-08-12T23:56:47.411587969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Aug 12 23:56:47.494107 kubelet[2453]: E0812 23:56:47.493298 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w79sn" podUID="c0354a12-b391-42fd-af3e-1ce2798bd729"
Aug 12 23:56:47.499496 kubelet[2453]: E0812 23:56:47.499457 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:56:47.499716 kubelet[2453]: W0812 23:56:47.499696 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:56:47.499813 kubelet[2453]: E0812 23:56:47.499797 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The three-entry FlexVolume probe failure above recurs, nearly verbatim, dozens of times through 23:56:50; the duplicate occurrences are omitted below.]
Aug 12 23:56:47.522092 containerd[1427]: time="2025-08-12T23:56:47.522040430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p4ws9,Uid:14d49ad2-2259-40de-b0df-51e5874607d1,Namespace:calico-system,Attempt:0,}"
Aug 12 23:56:47.548298 containerd[1427]: time="2025-08-12T23:56:47.548128555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 12 23:56:47.548298 containerd[1427]: time="2025-08-12T23:56:47.548220715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 12 23:56:47.548298 containerd[1427]: time="2025-08-12T23:56:47.548234155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:56:47.548742 containerd[1427]: time="2025-08-12T23:56:47.548514155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 12 23:56:47.570297 systemd[1]: Started cri-containerd-7c0d7a7b074d28164f4d616ff90e32db2dd0be523b0162bdff414cbdf3def042.scope - libcontainer container 7c0d7a7b074d28164f4d616ff90e32db2dd0be523b0162bdff414cbdf3def042.
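The three FlexVolume entries above are one mechanical failure logged in three stages: the kubelet execs the driver binary with the init command, the exec fails because /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, and decoding the resulting empty stdout as JSON yields "unexpected end of JSON input". The binary is what Calico's pod2daemon-flexvol image (whose pull appears later in this log) installs, so on a first boot these entries are transient noise. A minimal Go sketch of the decode step follows; DriverStatus is a stand-in type for the driver's expected JSON reply, not the kubelet's actual code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a stand-in for the JSON reply a FlexVolume driver is
// expected to print on stdout (e.g. {"status":"Success"}).
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Step 1: exec the driver with "init", as the kubelet does when probing
	// the plugin directory. On a host where the binary is missing this fails
	// and stdout stays empty. (The kubelet's message says "not found in
	// $PATH" because it resolves the executable itself; with an absolute
	// path Go reports "no such file or directory" instead.)
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init",
	).Output()
	if err != nil {
		fmt.Printf("FlexVolume: driver call failed: %v, output: %q\n", err, out)
	}

	// Step 2: unmarshalling the empty output reproduces the
	// driver-call.go:262 message exactly: "unexpected end of JSON input".
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Printf("Failed to unmarshal output for command: init, error: %v\n", err)
	}
}

Running this on a host without the driver prints both failure messages, which is why every probe of the plugin directory emits the same triplet until the binary lands.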
Aug 12 23:56:47.595138 kubelet[2453]: I0812 23:56:47.595068 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c0354a12-b391-42fd-af3e-1ce2798bd729-registration-dir\") pod \"csi-node-driver-w79sn\" (UID: \"c0354a12-b391-42fd-af3e-1ce2798bd729\") " pod="calico-system/csi-node-driver-w79sn"
Aug 12 23:56:47.595467 kubelet[2453]: I0812 23:56:47.595352 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0354a12-b391-42fd-af3e-1ce2798bd729-kubelet-dir\") pod \"csi-node-driver-w79sn\" (UID: \"c0354a12-b391-42fd-af3e-1ce2798bd729\") " pod="calico-system/csi-node-driver-w79sn"
Aug 12 23:56:47.599993 kubelet[2453]: I0812 23:56:47.599649 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c0354a12-b391-42fd-af3e-1ce2798bd729-socket-dir\") pod \"csi-node-driver-w79sn\" (UID: \"c0354a12-b391-42fd-af3e-1ce2798bd729\") " pod="calico-system/csi-node-driver-w79sn"
Aug 12 23:56:47.604327 kubelet[2453]: I0812 23:56:47.604119 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsk8d\" (UniqueName: \"kubernetes.io/projected/c0354a12-b391-42fd-af3e-1ce2798bd729-kube-api-access-xsk8d\") pod \"csi-node-driver-w79sn\" (UID: \"c0354a12-b391-42fd-af3e-1ce2798bd729\") " pod="calico-system/csi-node-driver-w79sn"
Aug 12 23:56:47.607606 kubelet[2453]: I0812 23:56:47.607286 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c0354a12-b391-42fd-af3e-1ce2798bd729-varrun\") pod \"csi-node-driver-w79sn\" (UID: \"c0354a12-b391-42fd-af3e-1ce2798bd729\") " pod="calico-system/csi-node-driver-w79sn"
Aug 12 23:56:47.622292 containerd[1427]: time="2025-08-12T23:56:47.622241209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p4ws9,Uid:14d49ad2-2259-40de-b0df-51e5874607d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c0d7a7b074d28164f4d616ff90e32db2dd0be523b0162bdff414cbdf3def042\""
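The reconciler entries above register the csi-node-driver pod's volumes by UniqueName before its containers can start: several host paths (registration-dir, kubelet-dir, socket-dir, varrun) plus a projected service-account token (kube-api-access-xsk8d). As an illustrative sketch only, not the actual Calico manifest, such volumes would be declared with the k8s.io/api types like this; the concrete host paths here are assumptions, since the log records only the volume names.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Hypothetical host paths for two of the volumes named in the log;
	// the names match the reconciler entries, the paths do not come
	// from this log.
	dirOrCreate := corev1.HostPathDirectoryOrCreate
	volumes := []corev1.Volume{
		{
			Name: "registration-dir",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{
					Path: "/var/lib/kubelet/plugins_registry",
					Type: &dirOrCreate,
				},
			},
		},
		{
			Name: "socket-dir",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{
					Path: "/var/lib/kubelet/plugins/csi.tigera.io",
					Type: &dirOrCreate,
				},
			},
		},
	}
	for _, v := range volumes {
		fmt.Printf("volume %q -> host path %s\n", v.Name, v.HostPath.Path)
	}
}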
Aug 12 23:56:48.637647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331784570.mount: Deactivated successfully.
Aug 12 23:56:49.175705 kubelet[2453]: E0812 23:56:49.175660 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w79sn" podUID="c0354a12-b391-42fd-af3e-1ce2798bd729"
Aug 12 23:56:49.187349 containerd[1427]: time="2025-08-12T23:56:49.187239932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:56:49.188188 containerd[1427]: time="2025-08-12T23:56:49.187832772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207"
Aug 12 23:56:49.189078 containerd[1427]: time="2025-08-12T23:56:49.189043812Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:56:49.191464 containerd[1427]: time="2025-08-12T23:56:49.191391732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 12 23:56:49.192254 containerd[1427]: time="2025-08-12T23:56:49.192217852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.780406403s"
Aug 12 23:56:49.192302 containerd[1427]: time="2025-08-12T23:56:49.192261612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\""
Aug 12 23:56:49.195241 containerd[1427]: time="2025-08-12T23:56:49.194987773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 12 23:56:49.212403 containerd[1427]: time="2025-08-12T23:56:49.212341816Z" level=info msg="CreateContainer within sandbox \"994e7ebfe5b517d6b8029ef879e5b5bb10c49488abe21d952ad8a117904bf375\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 12 23:56:49.238824 containerd[1427]: time="2025-08-12T23:56:49.238768460Z" level=info msg="CreateContainer within sandbox \"994e7ebfe5b517d6b8029ef879e5b5bb10c49488abe21d952ad8a117904bf375\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8cab15a30b66ec8da6fa45b83f0c41110b0b52347081dc9921eecb4d1f195cb3\""
Aug 12 23:56:49.239539 containerd[1427]: time="2025-08-12T23:56:49.239376660Z" level=info msg="StartContainer for \"8cab15a30b66ec8da6fa45b83f0c41110b0b52347081dc9921eecb4d1f195cb3\""
Aug 12 23:56:49.275205 systemd[1]: Started cri-containerd-8cab15a30b66ec8da6fa45b83f0c41110b0b52347081dc9921eecb4d1f195cb3.scope - libcontainer container 8cab15a30b66ec8da6fa45b83f0c41110b0b52347081dc9921eecb4d1f195cb3.
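The PullImage, ImageCreate, Pulled, CreateContainer, StartContainer sequence above is containerd's CRI surface driven by the kubelet; the "Pulled image" entry records the repo tag, repo digest, resolved image id, and pull duration. A rough equivalent using the containerd 1.x Go client directly, assuming the default socket path and the k8s.io namespace that CRI-managed images live in:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to containerd over its default socket (path assumed).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the same image the kubelet requested above; the
	// returned descriptor carries the digest seen in "Pulled image".
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s, digest %s", img.Name(), img.Target().Digest)
}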
Aug 12 23:56:49.344452 containerd[1427]: time="2025-08-12T23:56:49.341571557Z" level=info msg="StartContainer for \"8cab15a30b66ec8da6fa45b83f0c41110b0b52347081dc9921eecb4d1f195cb3\" returns successfully"
Aug 12 23:56:50.238750 kubelet[2453]: E0812 23:56:50.237468 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
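The recurring dns.go:153 entry is the kubelet warning that the node's resolv.conf lists more nameservers than the classic resolver limit of three (glibc's MAXNS); it applies the first three, here 1.1.1.1 1.0.0.1 8.8.8.8, and drops the rest. A rough sketch of that truncation logic, not the kubelet's actual implementation:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the classic resolver limit (glibc MAXNS = 3)
// that the kubelet enforces when building a pod's resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	// Collect every "nameserver <addr>" line.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}

	// Warn and truncate, as the dns.go:153 entry describes.
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded, applying first %d: %s\n",
			maxNameservers, strings.Join(servers[:maxNameservers], " "))
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}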
Error: unexpected end of JSON input" Aug 12 23:56:50.338336 kubelet[2453]: E0812 23:56:50.338178 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:56:50.338336 kubelet[2453]: W0812 23:56:50.338200 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:56:50.338336 kubelet[2453]: E0812 23:56:50.338219 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:56:50.338964 kubelet[2453]: E0812 23:56:50.338926 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:56:50.339018 kubelet[2453]: W0812 23:56:50.338974 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:56:50.339655 kubelet[2453]: E0812 23:56:50.339631 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:56:50.340722 kubelet[2453]: E0812 23:56:50.340694 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:56:50.340722 kubelet[2453]: W0812 23:56:50.340715 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:56:50.340844 kubelet[2453]: E0812 23:56:50.340807 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:56:50.341119 kubelet[2453]: E0812 23:56:50.341100 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:56:50.341119 kubelet[2453]: W0812 23:56:50.341117 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:56:50.341316 kubelet[2453]: E0812 23:56:50.341218 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:56:50.341425 kubelet[2453]: E0812 23:56:50.341370 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:56:50.341425 kubelet[2453]: W0812 23:56:50.341380 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:56:50.341425 kubelet[2453]: E0812 23:56:50.341398 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:56:50.342371 kubelet[2453]: E0812 23:56:50.342231 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:56:50.342371 kubelet[2453]: W0812 23:56:50.342254 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:56:50.342371 kubelet[2453]: E0812 23:56:50.342279 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:56:50.342813 kubelet[2453]: E0812 23:56:50.342669 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:56:50.342813 kubelet[2453]: W0812 23:56:50.342685 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:56:50.342813 kubelet[2453]: E0812 23:56:50.342800 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:56:50.343874 kubelet[2453]: E0812 23:56:50.343800 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:56:50.343874 kubelet[2453]: W0812 23:56:50.343826 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:56:50.343874 kubelet[2453]: E0812 23:56:50.343850 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:56:50.344130 kubelet[2453]: E0812 23:56:50.344114 2453 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:56:50.344130 kubelet[2453]: W0812 23:56:50.344129 2453 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:56:50.344240 kubelet[2453]: E0812 23:56:50.344140 2453 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:56:50.363141 containerd[1427]: time="2025-08-12T23:56:50.363064605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:50.365137 containerd[1427]: time="2025-08-12T23:56:50.365083365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 12 23:56:50.366839 containerd[1427]: time="2025-08-12T23:56:50.366778046Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:50.372979 containerd[1427]: time="2025-08-12T23:56:50.372838287Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:50.373706 containerd[1427]: time="2025-08-12T23:56:50.373585727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.178537394s" Aug 12 23:56:50.373706 containerd[1427]: time="2025-08-12T23:56:50.373628647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 12 23:56:50.376069 containerd[1427]: time="2025-08-12T23:56:50.375985287Z" level=info msg="CreateContainer within sandbox \"7c0d7a7b074d28164f4d616ff90e32db2dd0be523b0162bdff414cbdf3def042\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 12 23:56:50.399094 containerd[1427]: time="2025-08-12T23:56:50.399022251Z" level=info msg="CreateContainer within sandbox \"7c0d7a7b074d28164f4d616ff90e32db2dd0be523b0162bdff414cbdf3def042\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"49a01c7e39d4de9558feb5d8291fd81f53069ac52eab60908d5317380fb04294\"" Aug 12 23:56:50.399939 containerd[1427]: time="2025-08-12T23:56:50.399898931Z" level=info msg="StartContainer for \"49a01c7e39d4de9558feb5d8291fd81f53069ac52eab60908d5317380fb04294\"" Aug 12 23:56:50.436639 systemd[1]: run-containerd-runc-k8s.io-49a01c7e39d4de9558feb5d8291fd81f53069ac52eab60908d5317380fb04294-runc.5zRreN.mount: Deactivated successfully. Aug 12 23:56:50.449182 systemd[1]: Started cri-containerd-49a01c7e39d4de9558feb5d8291fd81f53069ac52eab60908d5317380fb04294.scope - libcontainer container 49a01c7e39d4de9558feb5d8291fd81f53069ac52eab60908d5317380fb04294. Aug 12 23:56:50.515366 containerd[1427]: time="2025-08-12T23:56:50.515136749Z" level=info msg="StartContainer for \"49a01c7e39d4de9558feb5d8291fd81f53069ac52eab60908d5317380fb04294\" returns successfully" Aug 12 23:56:50.606103 systemd[1]: cri-containerd-49a01c7e39d4de9558feb5d8291fd81f53069ac52eab60908d5317380fb04294.scope: Deactivated successfully. 
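
The repeated kubelet messages above are FlexVolume's dynamic plugin probe at work: for each directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ the kubelet execs the driver binary with the argument "init" and expects a JSON status object on stdout. The flexvol-driver container that just started is what installs the nodeagent~uds/uds binary; until it has, the exec fails, stdout stays empty, and unmarshalling an empty buffer yields exactly "unexpected end of JSON input". A minimal Go sketch of that failure mode (the driverStatus type is an illustrative stand-in, not the kubelet's own):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus stands in for the JSON object a FlexVolume driver prints
    // on stdout in response to "init".
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        // Probe a driver binary that has not been installed yet, as above.
        out, err := exec.Command(
            "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
            "init").Output()
        if err != nil {
            fmt.Println("driver call failed:", err) // the kubelet reports this at driver-call.go:149
        }
        // out is empty, so decoding fails the same way driver-call.go:262 reports.
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println("failed to unmarshal output:", err) // "unexpected end of JSON input"
        }
    }
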
Aug 12 23:56:50.850053 containerd[1427]: time="2025-08-12T23:56:50.839038640Z" level=info msg="shim disconnected" id=49a01c7e39d4de9558feb5d8291fd81f53069ac52eab60908d5317380fb04294 namespace=k8s.io Aug 12 23:56:50.850053 containerd[1427]: time="2025-08-12T23:56:50.849966042Z" level=warning msg="cleaning up after shim disconnected" id=49a01c7e39d4de9558feb5d8291fd81f53069ac52eab60908d5317380fb04294 namespace=k8s.io Aug 12 23:56:50.850053 containerd[1427]: time="2025-08-12T23:56:50.849985922Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:56:51.175874 kubelet[2453]: E0812 23:56:51.175813 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w79sn" podUID="c0354a12-b391-42fd-af3e-1ce2798bd729" Aug 12 23:56:51.246133 kubelet[2453]: I0812 23:56:51.245682 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:56:51.246133 kubelet[2453]: E0812 23:56:51.246129 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:51.247470 containerd[1427]: time="2025-08-12T23:56:51.247391742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 12 23:56:51.270488 kubelet[2453]: I0812 23:56:51.270404 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6d6884fc69-pls5t" podStartSLOduration=3.488456822 podStartE2EDuration="5.270387185s" podCreationTimestamp="2025-08-12 23:56:46 +0000 UTC" firstStartedPulling="2025-08-12 23:56:47.411055409 +0000 UTC m=+19.339967647" lastFinishedPulling="2025-08-12 23:56:49.192985772 +0000 UTC m=+21.121898010" observedRunningTime="2025-08-12 23:56:50.255076868 +0000 UTC m=+22.183989106" watchObservedRunningTime="2025-08-12 23:56:51.270387185 +0000 UTC m=+23.199299423" Aug 12 23:56:51.389272 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49a01c7e39d4de9558feb5d8291fd81f53069ac52eab60908d5317380fb04294-rootfs.mount: Deactivated successfully. 
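
The "Nameserver limits exceeded" warning above comes from the kubelet composing pod DNS: resolv.conf traditionally honors at most three nameserver entries, so the kubelet applies the first three and drops the rest. A small illustrative sketch of that trimming (applyNameserverLimit is a stand-in, not the kubelet's internal code, and the fourth entry here is hypothetical):

    package main

    import (
        "fmt"
        "strings"
    )

    // maxNameservers mirrors the three-entry resolv.conf limit behind the
    // dns.go warning above.
    const maxNameservers = 3

    // applyNameserverLimit keeps the first three nameservers and reports
    // whether any were dropped.
    func applyNameserverLimit(ns []string) ([]string, bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.1"} // last entry hypothetical
        applied, dropped := applyNameserverLimit(ns)
        if dropped {
            fmt.Printf("the applied nameserver line is: %s\n", strings.Join(applied, " "))
        }
    }
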
Aug 12 23:56:53.176276 kubelet[2453]: E0812 23:56:53.176220 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w79sn" podUID="c0354a12-b391-42fd-af3e-1ce2798bd729" Aug 12 23:56:53.736988 containerd[1427]: time="2025-08-12T23:56:53.736863767Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:53.736988 containerd[1427]: time="2025-08-12T23:56:53.729387446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 12 23:56:53.736988 containerd[1427]: time="2025-08-12T23:56:53.733703926Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.486256864s" Aug 12 23:56:53.737523 containerd[1427]: time="2025-08-12T23:56:53.737001367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 12 23:56:53.739313 containerd[1427]: time="2025-08-12T23:56:53.738173327Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:53.739313 containerd[1427]: time="2025-08-12T23:56:53.738871407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:53.742972 containerd[1427]: time="2025-08-12T23:56:53.741472567Z" level=info msg="CreateContainer within sandbox \"7c0d7a7b074d28164f4d616ff90e32db2dd0be523b0162bdff414cbdf3def042\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 12 23:56:53.981918 containerd[1427]: time="2025-08-12T23:56:53.981853919Z" level=info msg="CreateContainer within sandbox \"7c0d7a7b074d28164f4d616ff90e32db2dd0be523b0162bdff414cbdf3def042\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e4004323c040e04ec676f08c326d12cf83558ebc7f77cf9f64437dd0c53216f6\"" Aug 12 23:56:53.982658 containerd[1427]: time="2025-08-12T23:56:53.982622279Z" level=info msg="StartContainer for \"e4004323c040e04ec676f08c326d12cf83558ebc7f77cf9f64437dd0c53216f6\"" Aug 12 23:56:54.016189 systemd[1]: Started cri-containerd-e4004323c040e04ec676f08c326d12cf83558ebc7f77cf9f64437dd0c53216f6.scope - libcontainer container e4004323c040e04ec676f08c326d12cf83558ebc7f77cf9f64437dd0c53216f6. Aug 12 23:56:54.054843 containerd[1427]: time="2025-08-12T23:56:54.054792768Z" level=info msg="StartContainer for \"e4004323c040e04ec676f08c326d12cf83558ebc7f77cf9f64437dd0c53216f6\" returns successfully" Aug 12 23:56:54.687371 systemd[1]: cri-containerd-e4004323c040e04ec676f08c326d12cf83558ebc7f77cf9f64437dd0c53216f6.scope: Deactivated successfully. Aug 12 23:56:54.709931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4004323c040e04ec676f08c326d12cf83558ebc7f77cf9f64437dd0c53216f6-rootfs.mount: Deactivated successfully. 
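
The cni image pull above (65888320 bytes read, "in 2.486256864s", then CreateContainer/StartContainer for install-cni) is the same sequence the containerd Go client exposes directly. A minimal sketch, assuming the default socket path, the v1 client import path github.com/containerd/containerd, and the "k8s.io" namespace where CRI-managed images (labelled io.cri-containerd.image above) live:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the same containerd instance the kubelet uses.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI keeps its images and containers in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        start := time.Now()
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/cni:v3.30.2",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("Pulled %q (%s) in %s\n",
            img.Name(), img.Target().Digest, time.Since(start))
    }
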
Aug 12 23:56:54.713917 kubelet[2453]: I0812 23:56:54.713831 2453 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 12 23:56:54.717756 containerd[1427]: time="2025-08-12T23:56:54.716736408Z" level=info msg="shim disconnected" id=e4004323c040e04ec676f08c326d12cf83558ebc7f77cf9f64437dd0c53216f6 namespace=k8s.io Aug 12 23:56:54.717756 containerd[1427]: time="2025-08-12T23:56:54.717749488Z" level=warning msg="cleaning up after shim disconnected" id=e4004323c040e04ec676f08c326d12cf83558ebc7f77cf9f64437dd0c53216f6 namespace=k8s.io Aug 12 23:56:54.717756 containerd[1427]: time="2025-08-12T23:56:54.717762208Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 12 23:56:54.801453 systemd[1]: Created slice kubepods-besteffort-podd0d70758_a4a3_42e9_8e23_756a91ca0f83.slice - libcontainer container kubepods-besteffort-podd0d70758_a4a3_42e9_8e23_756a91ca0f83.slice. Aug 12 23:56:54.809065 systemd[1]: Created slice kubepods-burstable-podb91aefd7_fa4c_469b_969b_9459b2f96cc9.slice - libcontainer container kubepods-burstable-podb91aefd7_fa4c_469b_969b_9459b2f96cc9.slice. Aug 12 23:56:54.818775 systemd[1]: Created slice kubepods-burstable-podc162d718_8d24_407d_9e4c_2cf3d4f42ab4.slice - libcontainer container kubepods-burstable-podc162d718_8d24_407d_9e4c_2cf3d4f42ab4.slice. Aug 12 23:56:54.825507 systemd[1]: Created slice kubepods-besteffort-pod2291631d_1fc3_422f_a756_efdd85b1d503.slice - libcontainer container kubepods-besteffort-pod2291631d_1fc3_422f_a756_efdd85b1d503.slice. Aug 12 23:56:54.834630 systemd[1]: Created slice kubepods-besteffort-pod37a2d288_1878_44b7_b193_9a00f02f16f9.slice - libcontainer container kubepods-besteffort-pod37a2d288_1878_44b7_b193_9a00f02f16f9.slice. Aug 12 23:56:54.841845 systemd[1]: Created slice kubepods-besteffort-podc155b7b0_f2fb_45c9_ae9b_9acab303dcd1.slice - libcontainer container kubepods-besteffort-podc155b7b0_f2fb_45c9_ae9b_9acab303dcd1.slice. Aug 12 23:56:54.848475 systemd[1]: Created slice kubepods-besteffort-podad64c6a3_30ab_4a17_9b28_e049d25c1e5b.slice - libcontainer container kubepods-besteffort-podad64c6a3_30ab_4a17_9b28_e049d25c1e5b.slice. 
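
The slice names systemd creates above follow the kubelet's systemd cgroup-driver convention: kubepods-<qos>-pod<uid>.slice, with the dashes of the pod UID rewritten to underscores because systemd uses "-" to express slice hierarchy (kubepods-besteffort.slice is a child of kubepods.slice). A tiny illustrative helper (podSlice is not kubelet code) reproducing one of the names above:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSlice builds the systemd slice name for a pod: dashes in the UID
    // become underscores so they are not read as hierarchy separators.
    func podSlice(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSlice("besteffort", "ad64c6a3-30ab-4a17-9b28-e049d25c1e5b"))
        // kubepods-besteffort-podad64c6a3_30ab_4a17_9b28_e049d25c1e5b.slice
    }
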
Aug 12 23:56:54.899764 kubelet[2453]: I0812 23:56:54.899616 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2291631d-1fc3-422f-a756-efdd85b1d503-calico-apiserver-certs\") pod \"calico-apiserver-7cfd595b89-s4blw\" (UID: \"2291631d-1fc3-422f-a756-efdd85b1d503\") " pod="calico-apiserver/calico-apiserver-7cfd595b89-s4blw" Aug 12 23:56:54.899764 kubelet[2453]: I0812 23:56:54.899770 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99f2v\" (UniqueName: \"kubernetes.io/projected/c162d718-8d24-407d-9e4c-2cf3d4f42ab4-kube-api-access-99f2v\") pod \"coredns-7c65d6cfc9-zvjr6\" (UID: \"c162d718-8d24-407d-9e4c-2cf3d4f42ab4\") " pod="kube-system/coredns-7c65d6cfc9-zvjr6" Aug 12 23:56:54.899969 kubelet[2453]: I0812 23:56:54.899793 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a2d288-1878-44b7-b193-9a00f02f16f9-config\") pod \"goldmane-58fd7646b9-j5rs8\" (UID: \"37a2d288-1878-44b7-b193-9a00f02f16f9\") " pod="calico-system/goldmane-58fd7646b9-j5rs8" Aug 12 23:56:54.899969 kubelet[2453]: I0812 23:56:54.899814 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad64c6a3-30ab-4a17-9b28-e049d25c1e5b-tigera-ca-bundle\") pod \"calico-kube-controllers-597d6cb8d8-g8nzb\" (UID: \"ad64c6a3-30ab-4a17-9b28-e049d25c1e5b\") " pod="calico-system/calico-kube-controllers-597d6cb8d8-g8nzb" Aug 12 23:56:54.899969 kubelet[2453]: I0812 23:56:54.899833 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b91aefd7-fa4c-469b-969b-9459b2f96cc9-config-volume\") pod \"coredns-7c65d6cfc9-777z4\" (UID: \"b91aefd7-fa4c-469b-969b-9459b2f96cc9\") " pod="kube-system/coredns-7c65d6cfc9-777z4" Aug 12 23:56:54.899969 kubelet[2453]: I0812 23:56:54.899856 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d0d70758-a4a3-42e9-8e23-756a91ca0f83-calico-apiserver-certs\") pod \"calico-apiserver-7cfd595b89-dfkdk\" (UID: \"d0d70758-a4a3-42e9-8e23-756a91ca0f83\") " pod="calico-apiserver/calico-apiserver-7cfd595b89-dfkdk" Aug 12 23:56:54.899969 kubelet[2453]: I0812 23:56:54.899875 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjhs9\" (UniqueName: \"kubernetes.io/projected/b91aefd7-fa4c-469b-969b-9459b2f96cc9-kube-api-access-gjhs9\") pod \"coredns-7c65d6cfc9-777z4\" (UID: \"b91aefd7-fa4c-469b-969b-9459b2f96cc9\") " pod="kube-system/coredns-7c65d6cfc9-777z4" Aug 12 23:56:54.900130 kubelet[2453]: I0812 23:56:54.899890 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c155b7b0-f2fb-45c9-ae9b-9acab303dcd1-whisker-ca-bundle\") pod \"whisker-558b58668-tz428\" (UID: \"c155b7b0-f2fb-45c9-ae9b-9acab303dcd1\") " pod="calico-system/whisker-558b58668-tz428" Aug 12 23:56:54.900130 kubelet[2453]: I0812 23:56:54.899910 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/37a2d288-1878-44b7-b193-9a00f02f16f9-goldmane-key-pair\") pod \"goldmane-58fd7646b9-j5rs8\" (UID: \"37a2d288-1878-44b7-b193-9a00f02f16f9\") " pod="calico-system/goldmane-58fd7646b9-j5rs8" Aug 12 23:56:54.900130 kubelet[2453]: I0812 23:56:54.899931 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37a2d288-1878-44b7-b193-9a00f02f16f9-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-j5rs8\" (UID: \"37a2d288-1878-44b7-b193-9a00f02f16f9\") " pod="calico-system/goldmane-58fd7646b9-j5rs8" Aug 12 23:56:54.900130 kubelet[2453]: I0812 23:56:54.899963 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnr87\" (UniqueName: \"kubernetes.io/projected/37a2d288-1878-44b7-b193-9a00f02f16f9-kube-api-access-cnr87\") pod \"goldmane-58fd7646b9-j5rs8\" (UID: \"37a2d288-1878-44b7-b193-9a00f02f16f9\") " pod="calico-system/goldmane-58fd7646b9-j5rs8" Aug 12 23:56:54.900130 kubelet[2453]: I0812 23:56:54.900078 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c162d718-8d24-407d-9e4c-2cf3d4f42ab4-config-volume\") pod \"coredns-7c65d6cfc9-zvjr6\" (UID: \"c162d718-8d24-407d-9e4c-2cf3d4f42ab4\") " pod="kube-system/coredns-7c65d6cfc9-zvjr6" Aug 12 23:56:54.900297 kubelet[2453]: I0812 23:56:54.900103 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q5qq\" (UniqueName: \"kubernetes.io/projected/d0d70758-a4a3-42e9-8e23-756a91ca0f83-kube-api-access-7q5qq\") pod \"calico-apiserver-7cfd595b89-dfkdk\" (UID: \"d0d70758-a4a3-42e9-8e23-756a91ca0f83\") " pod="calico-apiserver/calico-apiserver-7cfd595b89-dfkdk" Aug 12 23:56:54.900297 kubelet[2453]: I0812 23:56:54.900121 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c155b7b0-f2fb-45c9-ae9b-9acab303dcd1-whisker-backend-key-pair\") pod \"whisker-558b58668-tz428\" (UID: \"c155b7b0-f2fb-45c9-ae9b-9acab303dcd1\") " pod="calico-system/whisker-558b58668-tz428" Aug 12 23:56:54.900297 kubelet[2453]: I0812 23:56:54.900141 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zmv9\" (UniqueName: \"kubernetes.io/projected/ad64c6a3-30ab-4a17-9b28-e049d25c1e5b-kube-api-access-7zmv9\") pod \"calico-kube-controllers-597d6cb8d8-g8nzb\" (UID: \"ad64c6a3-30ab-4a17-9b28-e049d25c1e5b\") " pod="calico-system/calico-kube-controllers-597d6cb8d8-g8nzb" Aug 12 23:56:54.900297 kubelet[2453]: I0812 23:56:54.900164 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4hwx\" (UniqueName: \"kubernetes.io/projected/2291631d-1fc3-422f-a756-efdd85b1d503-kube-api-access-s4hwx\") pod \"calico-apiserver-7cfd595b89-s4blw\" (UID: \"2291631d-1fc3-422f-a756-efdd85b1d503\") " pod="calico-apiserver/calico-apiserver-7cfd595b89-s4blw" Aug 12 23:56:54.900297 kubelet[2453]: I0812 23:56:54.900182 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98g5d\" (UniqueName: \"kubernetes.io/projected/c155b7b0-f2fb-45c9-ae9b-9acab303dcd1-kube-api-access-98g5d\") pod \"whisker-558b58668-tz428\" (UID: \"c155b7b0-f2fb-45c9-ae9b-9acab303dcd1\") " 
pod="calico-system/whisker-558b58668-tz428" Aug 12 23:56:55.107032 containerd[1427]: time="2025-08-12T23:56:55.106320775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd595b89-dfkdk,Uid:d0d70758-a4a3-42e9-8e23-756a91ca0f83,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:56:55.117649 kubelet[2453]: E0812 23:56:55.117602 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:55.118185 containerd[1427]: time="2025-08-12T23:56:55.118138856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-777z4,Uid:b91aefd7-fa4c-469b-969b-9459b2f96cc9,Namespace:kube-system,Attempt:0,}" Aug 12 23:56:55.122008 kubelet[2453]: E0812 23:56:55.121779 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:56:55.122490 containerd[1427]: time="2025-08-12T23:56:55.122283176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zvjr6,Uid:c162d718-8d24-407d-9e4c-2cf3d4f42ab4,Namespace:kube-system,Attempt:0,}" Aug 12 23:56:55.131849 containerd[1427]: time="2025-08-12T23:56:55.131773738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd595b89-s4blw,Uid:2291631d-1fc3-422f-a756-efdd85b1d503,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:56:55.138741 containerd[1427]: time="2025-08-12T23:56:55.138688698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-j5rs8,Uid:37a2d288-1878-44b7-b193-9a00f02f16f9,Namespace:calico-system,Attempt:0,}" Aug 12 23:56:55.146549 containerd[1427]: time="2025-08-12T23:56:55.146491779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558b58668-tz428,Uid:c155b7b0-f2fb-45c9-ae9b-9acab303dcd1,Namespace:calico-system,Attempt:0,}" Aug 12 23:56:55.157331 containerd[1427]: time="2025-08-12T23:56:55.152681940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597d6cb8d8-g8nzb,Uid:ad64c6a3-30ab-4a17-9b28-e049d25c1e5b,Namespace:calico-system,Attempt:0,}" Aug 12 23:56:55.188678 systemd[1]: Created slice kubepods-besteffort-podc0354a12_b391_42fd_af3e_1ce2798bd729.slice - libcontainer container kubepods-besteffort-podc0354a12_b391_42fd_af3e_1ce2798bd729.slice. 
Aug 12 23:56:55.191697 containerd[1427]: time="2025-08-12T23:56:55.191644544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w79sn,Uid:c0354a12-b391-42fd-af3e-1ce2798bd729,Namespace:calico-system,Attempt:0,}" Aug 12 23:56:55.287593 containerd[1427]: time="2025-08-12T23:56:55.287454595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 12 23:56:55.895892 containerd[1427]: time="2025-08-12T23:56:55.895768105Z" level=error msg="Failed to destroy network for sandbox \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.905818 containerd[1427]: time="2025-08-12T23:56:55.905733586Z" level=error msg="encountered an error cleaning up failed sandbox \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.906008 containerd[1427]: time="2025-08-12T23:56:55.905835266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zvjr6,Uid:c162d718-8d24-407d-9e4c-2cf3d4f42ab4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.908257 kubelet[2453]: E0812 23:56:55.908191 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.912430 kubelet[2453]: E0812 23:56:55.912349 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zvjr6" Aug 12 23:56:55.912430 kubelet[2453]: E0812 23:56:55.912421 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-zvjr6" Aug 12 23:56:55.912812 kubelet[2453]: E0812 23:56:55.912486 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-zvjr6_kube-system(c162d718-8d24-407d-9e4c-2cf3d4f42ab4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7c65d6cfc9-zvjr6_kube-system(c162d718-8d24-407d-9e4c-2cf3d4f42ab4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zvjr6" podUID="c162d718-8d24-407d-9e4c-2cf3d4f42ab4" Aug 12 23:56:55.917909 containerd[1427]: time="2025-08-12T23:56:55.917853427Z" level=error msg="Failed to destroy network for sandbox \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.919244 containerd[1427]: time="2025-08-12T23:56:55.919205987Z" level=error msg="encountered an error cleaning up failed sandbox \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.919487 containerd[1427]: time="2025-08-12T23:56:55.919462267Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-777z4,Uid:b91aefd7-fa4c-469b-969b-9459b2f96cc9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.919944 kubelet[2453]: E0812 23:56:55.919905 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.920282 kubelet[2453]: E0812 23:56:55.920136 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-777z4" Aug 12 23:56:55.920282 kubelet[2453]: E0812 23:56:55.920164 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-777z4" Aug 12 23:56:55.920282 kubelet[2453]: E0812 23:56:55.920227 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-777z4_kube-system(b91aefd7-fa4c-469b-969b-9459b2f96cc9)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-777z4_kube-system(b91aefd7-fa4c-469b-969b-9459b2f96cc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-777z4" podUID="b91aefd7-fa4c-469b-969b-9459b2f96cc9" Aug 12 23:56:55.921197 containerd[1427]: time="2025-08-12T23:56:55.921160108Z" level=error msg="Failed to destroy network for sandbox \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.924441 containerd[1427]: time="2025-08-12T23:56:55.924363188Z" level=error msg="encountered an error cleaning up failed sandbox \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.924686 containerd[1427]: time="2025-08-12T23:56:55.924447188Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd595b89-dfkdk,Uid:d0d70758-a4a3-42e9-8e23-756a91ca0f83,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.924747 kubelet[2453]: E0812 23:56:55.924667 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.924747 kubelet[2453]: E0812 23:56:55.924729 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cfd595b89-dfkdk" Aug 12 23:56:55.924886 kubelet[2453]: E0812 23:56:55.924748 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cfd595b89-dfkdk" Aug 12 23:56:55.924886 kubelet[2453]: E0812 23:56:55.924787 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" 
for \"calico-apiserver-7cfd595b89-dfkdk_calico-apiserver(d0d70758-a4a3-42e9-8e23-756a91ca0f83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cfd595b89-dfkdk_calico-apiserver(d0d70758-a4a3-42e9-8e23-756a91ca0f83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cfd595b89-dfkdk" podUID="d0d70758-a4a3-42e9-8e23-756a91ca0f83" Aug 12 23:56:55.929997 containerd[1427]: time="2025-08-12T23:56:55.929871189Z" level=error msg="Failed to destroy network for sandbox \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.930615 containerd[1427]: time="2025-08-12T23:56:55.930568549Z" level=error msg="encountered an error cleaning up failed sandbox \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.930730 containerd[1427]: time="2025-08-12T23:56:55.930629749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd595b89-s4blw,Uid:2291631d-1fc3-422f-a756-efdd85b1d503,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.931047 kubelet[2453]: E0812 23:56:55.930852 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.931047 kubelet[2453]: E0812 23:56:55.930922 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cfd595b89-s4blw" Aug 12 23:56:55.931047 kubelet[2453]: E0812 23:56:55.930944 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cfd595b89-s4blw" Aug 12 23:56:55.931211 
kubelet[2453]: E0812 23:56:55.931029 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cfd595b89-s4blw_calico-apiserver(2291631d-1fc3-422f-a756-efdd85b1d503)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cfd595b89-s4blw_calico-apiserver(2291631d-1fc3-422f-a756-efdd85b1d503)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cfd595b89-s4blw" podUID="2291631d-1fc3-422f-a756-efdd85b1d503" Aug 12 23:56:55.944387 containerd[1427]: time="2025-08-12T23:56:55.944311790Z" level=error msg="Failed to destroy network for sandbox \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.946062 containerd[1427]: time="2025-08-12T23:56:55.945912710Z" level=error msg="encountered an error cleaning up failed sandbox \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.946234 containerd[1427]: time="2025-08-12T23:56:55.946136070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-j5rs8,Uid:37a2d288-1878-44b7-b193-9a00f02f16f9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.946928 kubelet[2453]: E0812 23:56:55.946561 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.946928 kubelet[2453]: E0812 23:56:55.946787 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-j5rs8" Aug 12 23:56:55.946928 kubelet[2453]: E0812 23:56:55.946807 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-j5rs8" Aug 12 23:56:55.947385 kubelet[2453]: E0812 23:56:55.946876 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-j5rs8_calico-system(37a2d288-1878-44b7-b193-9a00f02f16f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-j5rs8_calico-system(37a2d288-1878-44b7-b193-9a00f02f16f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-j5rs8" podUID="37a2d288-1878-44b7-b193-9a00f02f16f9" Aug 12 23:56:55.952978 containerd[1427]: time="2025-08-12T23:56:55.952668631Z" level=error msg="Failed to destroy network for sandbox \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.953137 containerd[1427]: time="2025-08-12T23:56:55.953037391Z" level=error msg="encountered an error cleaning up failed sandbox \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.953137 containerd[1427]: time="2025-08-12T23:56:55.953099351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w79sn,Uid:c0354a12-b391-42fd-af3e-1ce2798bd729,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.953378 kubelet[2453]: E0812 23:56:55.953324 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.953455 kubelet[2453]: E0812 23:56:55.953410 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w79sn" Aug 12 23:56:55.953455 kubelet[2453]: E0812 23:56:55.953432 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w79sn" Aug 12 23:56:55.953531 kubelet[2453]: E0812 23:56:55.953470 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w79sn_calico-system(c0354a12-b391-42fd-af3e-1ce2798bd729)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w79sn_calico-system(c0354a12-b391-42fd-af3e-1ce2798bd729)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w79sn" podUID="c0354a12-b391-42fd-af3e-1ce2798bd729" Aug 12 23:56:55.969786 containerd[1427]: time="2025-08-12T23:56:55.969736553Z" level=error msg="Failed to destroy network for sandbox \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.970165 containerd[1427]: time="2025-08-12T23:56:55.970136233Z" level=error msg="encountered an error cleaning up failed sandbox \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.970238 containerd[1427]: time="2025-08-12T23:56:55.970195833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558b58668-tz428,Uid:c155b7b0-f2fb-45c9-ae9b-9acab303dcd1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.970710 kubelet[2453]: E0812 23:56:55.970437 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.970861 kubelet[2453]: E0812 23:56:55.970741 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-558b58668-tz428" Aug 12 23:56:55.970861 kubelet[2453]: E0812 23:56:55.970766 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-558b58668-tz428" Aug 12 23:56:55.970861 kubelet[2453]: E0812 23:56:55.970821 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-558b58668-tz428_calico-system(c155b7b0-f2fb-45c9-ae9b-9acab303dcd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-558b58668-tz428_calico-system(c155b7b0-f2fb-45c9-ae9b-9acab303dcd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-558b58668-tz428" podUID="c155b7b0-f2fb-45c9-ae9b-9acab303dcd1" Aug 12 23:56:55.974002 containerd[1427]: time="2025-08-12T23:56:55.973865034Z" level=error msg="Failed to destroy network for sandbox \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.974367 containerd[1427]: time="2025-08-12T23:56:55.974335114Z" level=error msg="encountered an error cleaning up failed sandbox \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.974429 containerd[1427]: time="2025-08-12T23:56:55.974408714Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597d6cb8d8-g8nzb,Uid:ad64c6a3-30ab-4a17-9b28-e049d25c1e5b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.974903 kubelet[2453]: E0812 23:56:55.974732 2453 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:55.974903 kubelet[2453]: E0812 23:56:55.974793 2453 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-597d6cb8d8-g8nzb" Aug 12 23:56:55.974903 kubelet[2453]: E0812 23:56:55.974811 2453 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-597d6cb8d8-g8nzb" Aug 12 23:56:55.975148 kubelet[2453]: E0812 23:56:55.974858 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-597d6cb8d8-g8nzb_calico-system(ad64c6a3-30ab-4a17-9b28-e049d25c1e5b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-597d6cb8d8-g8nzb_calico-system(ad64c6a3-30ab-4a17-9b28-e049d25c1e5b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-597d6cb8d8-g8nzb" podUID="ad64c6a3-30ab-4a17-9b28-e049d25c1e5b" Aug 12 23:56:56.286642 kubelet[2453]: I0812 23:56:56.286520 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:56:56.288442 containerd[1427]: time="2025-08-12T23:56:56.287710747Z" level=info msg="StopPodSandbox for \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\"" Aug 12 23:56:56.288442 containerd[1427]: time="2025-08-12T23:56:56.287893587Z" level=info msg="Ensure that sandbox 6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc in task-service has been cleanup successfully" Aug 12 23:56:56.289793 kubelet[2453]: I0812 23:56:56.288019 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:56:56.289866 containerd[1427]: time="2025-08-12T23:56:56.288514347Z" level=info msg="StopPodSandbox for \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\"" Aug 12 23:56:56.289866 containerd[1427]: time="2025-08-12T23:56:56.288660587Z" level=info msg="Ensure that sandbox 7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e in task-service has been cleanup successfully" Aug 12 23:56:56.292961 kubelet[2453]: I0812 23:56:56.292641 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:56:56.295677 containerd[1427]: time="2025-08-12T23:56:56.295235628Z" level=info msg="StopPodSandbox for \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\"" Aug 12 23:56:56.295677 containerd[1427]: time="2025-08-12T23:56:56.295432508Z" level=info msg="Ensure that sandbox 1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51 in task-service has been cleanup successfully" Aug 12 23:56:56.299937 kubelet[2453]: I0812 23:56:56.299907 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:56:56.302337 containerd[1427]: time="2025-08-12T23:56:56.300573189Z" level=info msg="StopPodSandbox for \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\"" Aug 12 23:56:56.302337 containerd[1427]: time="2025-08-12T23:56:56.300760589Z" level=info msg="Ensure that sandbox 
406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a in task-service has been cleanup successfully" Aug 12 23:56:56.304527 kubelet[2453]: I0812 23:56:56.304491 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:56:56.309574 containerd[1427]: time="2025-08-12T23:56:56.309396910Z" level=info msg="StopPodSandbox for \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\"" Aug 12 23:56:56.309977 containerd[1427]: time="2025-08-12T23:56:56.309879950Z" level=info msg="Ensure that sandbox 70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e in task-service has been cleanup successfully" Aug 12 23:56:56.310711 kubelet[2453]: I0812 23:56:56.310257 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:56:56.311145 containerd[1427]: time="2025-08-12T23:56:56.311117350Z" level=info msg="StopPodSandbox for \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\"" Aug 12 23:56:56.313241 containerd[1427]: time="2025-08-12T23:56:56.313182310Z" level=info msg="Ensure that sandbox 704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513 in task-service has been cleanup successfully" Aug 12 23:56:56.313532 kubelet[2453]: I0812 23:56:56.313247 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:56:56.314719 containerd[1427]: time="2025-08-12T23:56:56.314162950Z" level=info msg="StopPodSandbox for \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\"" Aug 12 23:56:56.316700 kubelet[2453]: I0812 23:56:56.316662 2453 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:56:56.316941 containerd[1427]: time="2025-08-12T23:56:56.316887230Z" level=info msg="Ensure that sandbox 0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04 in task-service has been cleanup successfully" Aug 12 23:56:56.317791 containerd[1427]: time="2025-08-12T23:56:56.317752710Z" level=info msg="StopPodSandbox for \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\"" Aug 12 23:56:56.318125 containerd[1427]: time="2025-08-12T23:56:56.318101431Z" level=info msg="Ensure that sandbox c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788 in task-service has been cleanup successfully" Aug 12 23:56:56.415295 containerd[1427]: time="2025-08-12T23:56:56.415200921Z" level=error msg="StopPodSandbox for \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\" failed" error="failed to destroy network for sandbox \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:56.416996 kubelet[2453]: E0812 23:56:56.415740 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:56:56.416996 kubelet[2453]: E0812 23:56:56.415825 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc"} Aug 12 23:56:56.416996 kubelet[2453]: E0812 23:56:56.415902 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0d70758-a4a3-42e9-8e23-756a91ca0f83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:56:56.416996 kubelet[2453]: E0812 23:56:56.415928 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0d70758-a4a3-42e9-8e23-756a91ca0f83\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cfd595b89-dfkdk" podUID="d0d70758-a4a3-42e9-8e23-756a91ca0f83" Aug 12 23:56:56.428982 containerd[1427]: time="2025-08-12T23:56:56.428895242Z" level=error msg="StopPodSandbox for \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\" failed" error="failed to destroy network for sandbox \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:56.429256 kubelet[2453]: E0812 23:56:56.429209 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:56:56.429441 kubelet[2453]: E0812 23:56:56.429264 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e"} Aug 12 23:56:56.429441 kubelet[2453]: E0812 23:56:56.429321 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c162d718-8d24-407d-9e4c-2cf3d4f42ab4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:56:56.429441 kubelet[2453]: E0812 23:56:56.429347 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c162d718-8d24-407d-9e4c-2cf3d4f42ab4\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-zvjr6" podUID="c162d718-8d24-407d-9e4c-2cf3d4f42ab4" Aug 12 23:56:56.440618 containerd[1427]: time="2025-08-12T23:56:56.440556084Z" level=error msg="StopPodSandbox for \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\" failed" error="failed to destroy network for sandbox \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:56.440861 kubelet[2453]: E0812 23:56:56.440803 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:56:56.441056 kubelet[2453]: E0812 23:56:56.440861 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51"} Aug 12 23:56:56.441056 kubelet[2453]: E0812 23:56:56.440896 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37a2d288-1878-44b7-b193-9a00f02f16f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:56:56.441056 kubelet[2453]: E0812 23:56:56.440932 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37a2d288-1878-44b7-b193-9a00f02f16f9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-j5rs8" podUID="37a2d288-1878-44b7-b193-9a00f02f16f9" Aug 12 23:56:56.446237 containerd[1427]: time="2025-08-12T23:56:56.446122324Z" level=error msg="StopPodSandbox for \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\" failed" error="failed to destroy network for sandbox \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:56.446672 kubelet[2453]: E0812 23:56:56.446624 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:56:56.446753 kubelet[2453]: E0812 23:56:56.446700 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513"} Aug 12 23:56:56.446837 kubelet[2453]: E0812 23:56:56.446740 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2291631d-1fc3-422f-a756-efdd85b1d503\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:56:56.446837 kubelet[2453]: E0812 23:56:56.446783 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2291631d-1fc3-422f-a756-efdd85b1d503\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cfd595b89-s4blw" podUID="2291631d-1fc3-422f-a756-efdd85b1d503" Aug 12 23:56:56.452222 containerd[1427]: time="2025-08-12T23:56:56.452163645Z" level=error msg="StopPodSandbox for \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\" failed" error="failed to destroy network for sandbox \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:56.452477 kubelet[2453]: E0812 23:56:56.452438 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:56:56.452565 kubelet[2453]: E0812 23:56:56.452496 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788"} Aug 12 23:56:56.452565 kubelet[2453]: E0812 23:56:56.452531 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b91aefd7-fa4c-469b-969b-9459b2f96cc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Aug 12 23:56:56.452690 kubelet[2453]: E0812 23:56:56.452553 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b91aefd7-fa4c-469b-969b-9459b2f96cc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-777z4" podUID="b91aefd7-fa4c-469b-969b-9459b2f96cc9" Aug 12 23:56:56.458999 containerd[1427]: time="2025-08-12T23:56:56.458165245Z" level=error msg="StopPodSandbox for \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\" failed" error="failed to destroy network for sandbox \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:56.458999 containerd[1427]: time="2025-08-12T23:56:56.459096446Z" level=error msg="StopPodSandbox for \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\" failed" error="failed to destroy network for sandbox \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:56.459330 kubelet[2453]: E0812 23:56:56.458412 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:56:56.459330 kubelet[2453]: E0812 23:56:56.458466 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e"} Aug 12 23:56:56.459330 kubelet[2453]: E0812 23:56:56.458500 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c155b7b0-f2fb-45c9-ae9b-9acab303dcd1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:56:56.459330 kubelet[2453]: E0812 23:56:56.458522 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c155b7b0-f2fb-45c9-ae9b-9acab303dcd1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-558b58668-tz428" podUID="c155b7b0-f2fb-45c9-ae9b-9acab303dcd1" Aug 12 23:56:56.459581 containerd[1427]: time="2025-08-12T23:56:56.459120766Z" level=error msg="StopPodSandbox for \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\" failed" error="failed to destroy network for sandbox \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:56:56.459642 kubelet[2453]: E0812 23:56:56.459301 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:56:56.459642 kubelet[2453]: E0812 23:56:56.459350 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a"} Aug 12 23:56:56.459642 kubelet[2453]: E0812 23:56:56.459377 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad64c6a3-30ab-4a17-9b28-e049d25c1e5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:56:56.459642 kubelet[2453]: E0812 23:56:56.459395 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad64c6a3-30ab-4a17-9b28-e049d25c1e5b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-597d6cb8d8-g8nzb" podUID="ad64c6a3-30ab-4a17-9b28-e049d25c1e5b" Aug 12 23:56:56.459872 kubelet[2453]: E0812 23:56:56.459464 2453 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:56:56.459872 kubelet[2453]: E0812 23:56:56.459483 2453 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04"} Aug 12 23:56:56.459872 kubelet[2453]: E0812 23:56:56.459537 2453 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c0354a12-b391-42fd-af3e-1ce2798bd729\" with KillPodSandboxError: \"rpc error: code 
= Unknown desc = failed to destroy network for sandbox \\\"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 12 23:56:56.459872 kubelet[2453]: E0812 23:56:56.459555 2453 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c0354a12-b391-42fd-af3e-1ce2798bd729\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w79sn" podUID="c0354a12-b391-42fd-af3e-1ce2798bd729" Aug 12 23:56:59.361220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount625405365.mount: Deactivated successfully. Aug 12 23:56:59.556671 containerd[1427]: time="2025-08-12T23:56:59.556600187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:59.557863 containerd[1427]: time="2025-08-12T23:56:59.557815347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 12 23:56:59.559842 containerd[1427]: time="2025-08-12T23:56:59.559804347Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:59.564749 containerd[1427]: time="2025-08-12T23:56:59.564701187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:56:59.565274 containerd[1427]: time="2025-08-12T23:56:59.565235787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.277698912s" Aug 12 23:56:59.565315 containerd[1427]: time="2025-08-12T23:56:59.565272547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 12 23:56:59.577046 containerd[1427]: time="2025-08-12T23:56:59.575644308Z" level=info msg="CreateContainer within sandbox \"7c0d7a7b074d28164f4d616ff90e32db2dd0be523b0162bdff414cbdf3def042\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 12 23:56:59.602371 containerd[1427]: time="2025-08-12T23:56:59.602308111Z" level=info msg="CreateContainer within sandbox \"7c0d7a7b074d28164f4d616ff90e32db2dd0be523b0162bdff414cbdf3def042\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c20199dc0820c5a35b19ea862f6502f88ba0c3514bbea8c00f3d63acce64a52f\"" Aug 12 23:56:59.602937 containerd[1427]: time="2025-08-12T23:56:59.602914391Z" level=info msg="StartContainer for \"c20199dc0820c5a35b19ea862f6502f88ba0c3514bbea8c00f3d63acce64a52f\"" Aug 12 23:56:59.662339 systemd[1]: Started 
cri-containerd-c20199dc0820c5a35b19ea862f6502f88ba0c3514bbea8c00f3d63acce64a52f.scope - libcontainer container c20199dc0820c5a35b19ea862f6502f88ba0c3514bbea8c00f3d63acce64a52f. Aug 12 23:56:59.711413 containerd[1427]: time="2025-08-12T23:56:59.704350960Z" level=info msg="StartContainer for \"c20199dc0820c5a35b19ea862f6502f88ba0c3514bbea8c00f3d63acce64a52f\" returns successfully" Aug 12 23:56:59.870890 kubelet[2453]: I0812 23:56:59.870836 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:56:59.871418 kubelet[2453]: E0812 23:56:59.871246 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:57:00.020311 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 12 23:57:00.020489 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Aug 12 23:57:00.217898 containerd[1427]: time="2025-08-12T23:57:00.217848484Z" level=info msg="StopPodSandbox for \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\"" Aug 12 23:57:00.329856 kubelet[2453]: E0812 23:57:00.328027 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:57:00.431162 kubelet[2453]: I0812 23:57:00.426129 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p4ws9" podStartSLOduration=1.483818963 podStartE2EDuration="13.426105501s" podCreationTimestamp="2025-08-12 23:56:47 +0000 UTC" firstStartedPulling="2025-08-12 23:56:47.623740809 +0000 UTC m=+19.552653007" lastFinishedPulling="2025-08-12 23:56:59.566027307 +0000 UTC m=+31.494939545" observedRunningTime="2025-08-12 23:57:00.425398981 +0000 UTC m=+32.354311219" watchObservedRunningTime="2025-08-12 23:57:00.426105501 +0000 UTC m=+32.355017739" Aug 12 23:57:00.650405 containerd[1427]: 2025-08-12 23:57:00.425 [INFO][3759] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:57:00.650405 containerd[1427]: 2025-08-12 23:57:00.427 [INFO][3759] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" iface="eth0" netns="/var/run/netns/cni-e580ad75-8608-a182-404f-cb943e80ecc1" Aug 12 23:57:00.650405 containerd[1427]: 2025-08-12 23:57:00.428 [INFO][3759] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" iface="eth0" netns="/var/run/netns/cni-e580ad75-8608-a182-404f-cb943e80ecc1" Aug 12 23:57:00.650405 containerd[1427]: 2025-08-12 23:57:00.429 [INFO][3759] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" iface="eth0" netns="/var/run/netns/cni-e580ad75-8608-a182-404f-cb943e80ecc1" Aug 12 23:57:00.650405 containerd[1427]: 2025-08-12 23:57:00.429 [INFO][3759] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:57:00.650405 containerd[1427]: 2025-08-12 23:57:00.431 [INFO][3759] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:57:00.650405 containerd[1427]: 2025-08-12 23:57:00.622 [INFO][3770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" HandleID="k8s-pod-network.70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Workload="localhost-k8s-whisker--558b58668--tz428-eth0" Aug 12 23:57:00.650405 containerd[1427]: 2025-08-12 23:57:00.622 [INFO][3770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:00.650405 containerd[1427]: 2025-08-12 23:57:00.623 [INFO][3770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:00.650405 containerd[1427]: 2025-08-12 23:57:00.643 [WARNING][3770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" HandleID="k8s-pod-network.70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Workload="localhost-k8s-whisker--558b58668--tz428-eth0" Aug 12 23:57:00.650405 containerd[1427]: 2025-08-12 23:57:00.643 [INFO][3770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" HandleID="k8s-pod-network.70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Workload="localhost-k8s-whisker--558b58668--tz428-eth0" Aug 12 23:57:00.650405 containerd[1427]: 2025-08-12 23:57:00.644 [INFO][3770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:00.650405 containerd[1427]: 2025-08-12 23:57:00.647 [INFO][3759] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:57:00.651105 containerd[1427]: time="2025-08-12T23:57:00.650575719Z" level=info msg="TearDown network for sandbox \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\" successfully" Aug 12 23:57:00.651105 containerd[1427]: time="2025-08-12T23:57:00.650608639Z" level=info msg="StopPodSandbox for \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\" returns successfully" Aug 12 23:57:00.652928 systemd[1]: run-netns-cni\x2de580ad75\x2d8608\x2da182\x2d404f\x2dcb943e80ecc1.mount: Deactivated successfully. 
Aug 12 23:57:00.755611 kubelet[2453]: I0812 23:57:00.755554 2453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c155b7b0-f2fb-45c9-ae9b-9acab303dcd1-whisker-ca-bundle\") pod \"c155b7b0-f2fb-45c9-ae9b-9acab303dcd1\" (UID: \"c155b7b0-f2fb-45c9-ae9b-9acab303dcd1\") " Aug 12 23:57:00.755611 kubelet[2453]: I0812 23:57:00.755605 2453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c155b7b0-f2fb-45c9-ae9b-9acab303dcd1-whisker-backend-key-pair\") pod \"c155b7b0-f2fb-45c9-ae9b-9acab303dcd1\" (UID: \"c155b7b0-f2fb-45c9-ae9b-9acab303dcd1\") " Aug 12 23:57:00.755870 kubelet[2453]: I0812 23:57:00.755630 2453 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98g5d\" (UniqueName: \"kubernetes.io/projected/c155b7b0-f2fb-45c9-ae9b-9acab303dcd1-kube-api-access-98g5d\") pod \"c155b7b0-f2fb-45c9-ae9b-9acab303dcd1\" (UID: \"c155b7b0-f2fb-45c9-ae9b-9acab303dcd1\") " Aug 12 23:57:00.767236 kubelet[2453]: I0812 23:57:00.766778 2453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c155b7b0-f2fb-45c9-ae9b-9acab303dcd1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c155b7b0-f2fb-45c9-ae9b-9acab303dcd1" (UID: "c155b7b0-f2fb-45c9-ae9b-9acab303dcd1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 12 23:57:00.783242 kubelet[2453]: I0812 23:57:00.783183 2453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c155b7b0-f2fb-45c9-ae9b-9acab303dcd1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c155b7b0-f2fb-45c9-ae9b-9acab303dcd1" (UID: "c155b7b0-f2fb-45c9-ae9b-9acab303dcd1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 12 23:57:00.783242 kubelet[2453]: I0812 23:57:00.783185 2453 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c155b7b0-f2fb-45c9-ae9b-9acab303dcd1-kube-api-access-98g5d" (OuterVolumeSpecName: "kube-api-access-98g5d") pod "c155b7b0-f2fb-45c9-ae9b-9acab303dcd1" (UID: "c155b7b0-f2fb-45c9-ae9b-9acab303dcd1"). InnerVolumeSpecName "kube-api-access-98g5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 12 23:57:00.783327 systemd[1]: var-lib-kubelet-pods-c155b7b0\x2df2fb\x2d45c9\x2dae9b\x2d9acab303dcd1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 12 23:57:00.785852 systemd[1]: var-lib-kubelet-pods-c155b7b0\x2df2fb\x2d45c9\x2dae9b\x2d9acab303dcd1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d98g5d.mount: Deactivated successfully. 
Aug 12 23:57:00.856539 kubelet[2453]: I0812 23:57:00.856485 2453 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c155b7b0-f2fb-45c9-ae9b-9acab303dcd1-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 12 23:57:00.856539 kubelet[2453]: I0812 23:57:00.856525 2453 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c155b7b0-f2fb-45c9-ae9b-9acab303dcd1-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 12 23:57:00.856539 kubelet[2453]: I0812 23:57:00.856539 2453 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98g5d\" (UniqueName: \"kubernetes.io/projected/c155b7b0-f2fb-45c9-ae9b-9acab303dcd1-kube-api-access-98g5d\") on node \"localhost\" DevicePath \"\"" Aug 12 23:57:01.336465 systemd[1]: Removed slice kubepods-besteffort-podc155b7b0_f2fb_45c9_ae9b_9acab303dcd1.slice - libcontainer container kubepods-besteffort-podc155b7b0_f2fb_45c9_ae9b_9acab303dcd1.slice. Aug 12 23:57:01.416457 systemd[1]: run-containerd-runc-k8s.io-c20199dc0820c5a35b19ea862f6502f88ba0c3514bbea8c00f3d63acce64a52f-runc.rXgnOJ.mount: Deactivated successfully. Aug 12 23:57:01.443141 systemd[1]: Created slice kubepods-besteffort-podc66f65da_c73f_40fc_a1de_b0371525074d.slice - libcontainer container kubepods-besteffort-podc66f65da_c73f_40fc_a1de_b0371525074d.slice. Aug 12 23:57:01.561510 kubelet[2453]: I0812 23:57:01.561443 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c66f65da-c73f-40fc-a1de-b0371525074d-whisker-backend-key-pair\") pod \"whisker-68575d79b-72rbd\" (UID: \"c66f65da-c73f-40fc-a1de-b0371525074d\") " pod="calico-system/whisker-68575d79b-72rbd" Aug 12 23:57:01.561510 kubelet[2453]: I0812 23:57:01.561501 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c66f65da-c73f-40fc-a1de-b0371525074d-whisker-ca-bundle\") pod \"whisker-68575d79b-72rbd\" (UID: \"c66f65da-c73f-40fc-a1de-b0371525074d\") " pod="calico-system/whisker-68575d79b-72rbd" Aug 12 23:57:01.561976 kubelet[2453]: I0812 23:57:01.561548 2453 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlzhp\" (UniqueName: \"kubernetes.io/projected/c66f65da-c73f-40fc-a1de-b0371525074d-kube-api-access-rlzhp\") pod \"whisker-68575d79b-72rbd\" (UID: \"c66f65da-c73f-40fc-a1de-b0371525074d\") " pod="calico-system/whisker-68575d79b-72rbd" Aug 12 23:57:01.746652 containerd[1427]: time="2025-08-12T23:57:01.746599406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68575d79b-72rbd,Uid:c66f65da-c73f-40fc-a1de-b0371525074d,Namespace:calico-system,Attempt:0,}" Aug 12 23:57:01.974986 kernel: bpftool[3965]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 12 23:57:02.019033 systemd-networkd[1365]: cali11cac698866: Link UP Aug 12 23:57:02.019684 systemd-networkd[1365]: cali11cac698866: Gained carrier Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.859 [INFO][3915] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.877 [INFO][3915] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--68575d79b--72rbd-eth0 
whisker-68575d79b- calico-system c66f65da-c73f-40fc-a1de-b0371525074d 899 0 2025-08-12 23:57:01 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68575d79b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-68575d79b-72rbd eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali11cac698866 [] [] }} ContainerID="0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" Namespace="calico-system" Pod="whisker-68575d79b-72rbd" WorkloadEndpoint="localhost-k8s-whisker--68575d79b--72rbd-" Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.877 [INFO][3915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" Namespace="calico-system" Pod="whisker-68575d79b-72rbd" WorkloadEndpoint="localhost-k8s-whisker--68575d79b--72rbd-eth0" Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.915 [INFO][3947] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" HandleID="k8s-pod-network.0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" Workload="localhost-k8s-whisker--68575d79b--72rbd-eth0" Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.915 [INFO][3947] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" HandleID="k8s-pod-network.0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" Workload="localhost-k8s-whisker--68575d79b--72rbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137530), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-68575d79b-72rbd", "timestamp":"2025-08-12 23:57:01.915003939 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.915 [INFO][3947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.915 [INFO][3947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.915 [INFO][3947] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.931 [INFO][3947] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" host="localhost" Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.956 [INFO][3947] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.962 [INFO][3947] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.964 [INFO][3947] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.967 [INFO][3947] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.967 [INFO][3947] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" host="localhost" Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.969 [INFO][3947] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506 Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:01.992 [INFO][3947] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" host="localhost" Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:02.010 [INFO][3947] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" host="localhost" Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:02.010 [INFO][3947] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" host="localhost" Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:02.010 [INFO][3947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:57:02.040495 containerd[1427]: 2025-08-12 23:57:02.010 [INFO][3947] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" HandleID="k8s-pod-network.0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" Workload="localhost-k8s-whisker--68575d79b--72rbd-eth0" Aug 12 23:57:02.041318 containerd[1427]: 2025-08-12 23:57:02.012 [INFO][3915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" Namespace="calico-system" Pod="whisker-68575d79b-72rbd" WorkloadEndpoint="localhost-k8s-whisker--68575d79b--72rbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68575d79b--72rbd-eth0", GenerateName:"whisker-68575d79b-", Namespace:"calico-system", SelfLink:"", UID:"c66f65da-c73f-40fc-a1de-b0371525074d", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68575d79b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-68575d79b-72rbd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali11cac698866", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:02.041318 containerd[1427]: 2025-08-12 23:57:02.012 [INFO][3915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" Namespace="calico-system" Pod="whisker-68575d79b-72rbd" WorkloadEndpoint="localhost-k8s-whisker--68575d79b--72rbd-eth0" Aug 12 23:57:02.041318 containerd[1427]: 2025-08-12 23:57:02.012 [INFO][3915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11cac698866 ContainerID="0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" Namespace="calico-system" Pod="whisker-68575d79b-72rbd" WorkloadEndpoint="localhost-k8s-whisker--68575d79b--72rbd-eth0" Aug 12 23:57:02.041318 containerd[1427]: 2025-08-12 23:57:02.022 [INFO][3915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" Namespace="calico-system" Pod="whisker-68575d79b-72rbd" WorkloadEndpoint="localhost-k8s-whisker--68575d79b--72rbd-eth0" Aug 12 23:57:02.041318 containerd[1427]: 2025-08-12 23:57:02.024 [INFO][3915] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" Namespace="calico-system" Pod="whisker-68575d79b-72rbd" WorkloadEndpoint="localhost-k8s-whisker--68575d79b--72rbd-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--68575d79b--72rbd-eth0", GenerateName:"whisker-68575d79b-", Namespace:"calico-system", SelfLink:"", UID:"c66f65da-c73f-40fc-a1de-b0371525074d", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 57, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68575d79b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506", Pod:"whisker-68575d79b-72rbd", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali11cac698866", MAC:"d2:5a:be:5d:94:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:02.041318 containerd[1427]: 2025-08-12 23:57:02.036 [INFO][3915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506" Namespace="calico-system" Pod="whisker-68575d79b-72rbd" WorkloadEndpoint="localhost-k8s-whisker--68575d79b--72rbd-eth0" Aug 12 23:57:02.074059 containerd[1427]: time="2025-08-12T23:57:02.073877991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:57:02.074059 containerd[1427]: time="2025-08-12T23:57:02.074006911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:57:02.074059 containerd[1427]: time="2025-08-12T23:57:02.074019311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:02.075083 containerd[1427]: time="2025-08-12T23:57:02.074166311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:02.093133 systemd[1]: Started cri-containerd-0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506.scope - libcontainer container 0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506. 
Aug 12 23:57:02.112754 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:57:02.160250 containerd[1427]: time="2025-08-12T23:57:02.159740597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68575d79b-72rbd,Uid:c66f65da-c73f-40fc-a1de-b0371525074d,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506\"" Aug 12 23:57:02.160568 systemd-networkd[1365]: vxlan.calico: Link UP Aug 12 23:57:02.160573 systemd-networkd[1365]: vxlan.calico: Gained carrier Aug 12 23:57:02.164268 containerd[1427]: time="2025-08-12T23:57:02.164194237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 12 23:57:02.180029 kubelet[2453]: I0812 23:57:02.179576 2453 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c155b7b0-f2fb-45c9-ae9b-9acab303dcd1" path="/var/lib/kubelet/pods/c155b7b0-f2fb-45c9-ae9b-9acab303dcd1/volumes" Aug 12 23:57:03.359054 containerd[1427]: time="2025-08-12T23:57:03.359002282Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:03.359627 containerd[1427]: time="2025-08-12T23:57:03.359583163Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 12 23:57:03.361070 containerd[1427]: time="2025-08-12T23:57:03.361028923Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:03.363698 containerd[1427]: time="2025-08-12T23:57:03.363643203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:03.364550 containerd[1427]: time="2025-08-12T23:57:03.364508083Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.200160726s" Aug 12 23:57:03.364611 containerd[1427]: time="2025-08-12T23:57:03.364551723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 12 23:57:03.373701 containerd[1427]: time="2025-08-12T23:57:03.373622523Z" level=info msg="CreateContainer within sandbox \"0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 12 23:57:03.385296 containerd[1427]: time="2025-08-12T23:57:03.385237644Z" level=info msg="CreateContainer within sandbox \"0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7637561bc89376ae96124ab3e3b9b87aaf6d46ea89dd5cb572a89bc3615af59b\"" Aug 12 23:57:03.392743 containerd[1427]: time="2025-08-12T23:57:03.392698765Z" level=info msg="StartContainer for \"7637561bc89376ae96124ab3e3b9b87aaf6d46ea89dd5cb572a89bc3615af59b\"" Aug 12 23:57:03.427065 systemd-networkd[1365]: cali11cac698866: Gained IPv6LL Aug 12 23:57:03.436197 systemd[1]: 
Started cri-containerd-7637561bc89376ae96124ab3e3b9b87aaf6d46ea89dd5cb572a89bc3615af59b.scope - libcontainer container 7637561bc89376ae96124ab3e3b9b87aaf6d46ea89dd5cb572a89bc3615af59b. Aug 12 23:57:03.469155 containerd[1427]: time="2025-08-12T23:57:03.469071970Z" level=info msg="StartContainer for \"7637561bc89376ae96124ab3e3b9b87aaf6d46ea89dd5cb572a89bc3615af59b\" returns successfully" Aug 12 23:57:03.471624 containerd[1427]: time="2025-08-12T23:57:03.471589250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 12 23:57:03.937115 systemd-networkd[1365]: vxlan.calico: Gained IPv6LL Aug 12 23:57:05.012734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257648775.mount: Deactivated successfully. Aug 12 23:57:05.029793 containerd[1427]: time="2025-08-12T23:57:05.029741352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:05.030712 containerd[1427]: time="2025-08-12T23:57:05.030673712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 12 23:57:05.031458 containerd[1427]: time="2025-08-12T23:57:05.031428232Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:05.033631 containerd[1427]: time="2025-08-12T23:57:05.033597072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:05.034539 containerd[1427]: time="2025-08-12T23:57:05.034306992Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.562680742s" Aug 12 23:57:05.034539 containerd[1427]: time="2025-08-12T23:57:05.034353632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 12 23:57:05.036957 containerd[1427]: time="2025-08-12T23:57:05.036907792Z" level=info msg="CreateContainer within sandbox \"0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 12 23:57:05.047876 containerd[1427]: time="2025-08-12T23:57:05.047822353Z" level=info msg="CreateContainer within sandbox \"0c1568c0f302007bd0a828836a4a955fc56383c359a344477edede72e356f506\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c88b9d7027c8dea5114cfdfeebe471d82c9501c461a3593ae1b8b09967e7a5e0\"" Aug 12 23:57:05.048408 containerd[1427]: time="2025-08-12T23:57:05.048368033Z" level=info msg="StartContainer for \"c88b9d7027c8dea5114cfdfeebe471d82c9501c461a3593ae1b8b09967e7a5e0\"" Aug 12 23:57:05.088172 systemd[1]: Started cri-containerd-c88b9d7027c8dea5114cfdfeebe471d82c9501c461a3593ae1b8b09967e7a5e0.scope - libcontainer container c88b9d7027c8dea5114cfdfeebe471d82c9501c461a3593ae1b8b09967e7a5e0. 
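The two image pulls above each report a size and an elapsed time ("size \"5974847\" in 1.200160726s"). Those duration strings are in Go's own format, so time.ParseDuration consumes them directly. A short sketch that extracts size and duration from such a line and derives throughput; the regexp is my own best-effort match for the quoted message shape, not anything from containerd:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"time"
)

// pulledRe keys on the `size \"N\" in D` tail of the "Pulled image"
// messages above. Best-effort pattern, assumed from the log text.
var pulledRe = regexp.MustCompile(`size \\"(\d+)\\" in ([0-9.]+m?s)`)

func main() {
	line := `Pulled image "ghcr.io/flatcar/calico/whisker:v3.30.2" ... size \"5974847\" in 1.200160726s`
	m := pulledRe.FindStringSubmatch(line)
	if m == nil {
		return
	}
	bytes, _ := strconv.ParseInt(m[1], 10, 64)
	d, _ := time.ParseDuration(m[2]) // "1.200160726s" parses as-is
	fmt.Printf("%d bytes in %v (%.1f MiB/s)\n", bytes, d,
		float64(bytes)/d.Seconds()/(1<<20))
}
```

Against the whisker pull that works out to roughly 4.7 MiB/s; the whisker-backend image (30 MB in 1.56 s) pulled noticeably faster, presumably warm registry caches.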
Aug 12 23:57:05.179609 containerd[1427]: time="2025-08-12T23:57:05.179554481Z" level=info msg="StartContainer for \"c88b9d7027c8dea5114cfdfeebe471d82c9501c461a3593ae1b8b09967e7a5e0\" returns successfully" Aug 12 23:57:05.356478 kubelet[2453]: I0812 23:57:05.355317 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-68575d79b-72rbd" podStartSLOduration=1.482156536 podStartE2EDuration="4.355297291s" podCreationTimestamp="2025-08-12 23:57:01 +0000 UTC" firstStartedPulling="2025-08-12 23:57:02.162209157 +0000 UTC m=+34.091121395" lastFinishedPulling="2025-08-12 23:57:05.035349912 +0000 UTC m=+36.964262150" observedRunningTime="2025-08-12 23:57:05.354173691 +0000 UTC m=+37.283085969" watchObservedRunningTime="2025-08-12 23:57:05.355297291 +0000 UTC m=+37.284209529" Aug 12 23:57:08.177359 containerd[1427]: time="2025-08-12T23:57:08.177300367Z" level=info msg="StopPodSandbox for \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\"" Aug 12 23:57:08.177747 containerd[1427]: time="2025-08-12T23:57:08.177384687Z" level=info msg="StopPodSandbox for \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\"" Aug 12 23:57:08.299222 containerd[1427]: 2025-08-12 23:57:08.250 [INFO][4238] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:57:08.299222 containerd[1427]: 2025-08-12 23:57:08.251 [INFO][4238] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" iface="eth0" netns="/var/run/netns/cni-d109d62f-227a-9765-4ee0-fca3b75b916e" Aug 12 23:57:08.299222 containerd[1427]: 2025-08-12 23:57:08.251 [INFO][4238] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" iface="eth0" netns="/var/run/netns/cni-d109d62f-227a-9765-4ee0-fca3b75b916e" Aug 12 23:57:08.299222 containerd[1427]: 2025-08-12 23:57:08.252 [INFO][4238] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" iface="eth0" netns="/var/run/netns/cni-d109d62f-227a-9765-4ee0-fca3b75b916e" Aug 12 23:57:08.299222 containerd[1427]: 2025-08-12 23:57:08.252 [INFO][4238] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:57:08.299222 containerd[1427]: 2025-08-12 23:57:08.252 [INFO][4238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:57:08.299222 containerd[1427]: 2025-08-12 23:57:08.282 [INFO][4255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" HandleID="k8s-pod-network.0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Workload="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:08.299222 containerd[1427]: 2025-08-12 23:57:08.282 [INFO][4255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:08.299222 containerd[1427]: 2025-08-12 23:57:08.282 [INFO][4255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:08.299222 containerd[1427]: 2025-08-12 23:57:08.293 [WARNING][4255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" HandleID="k8s-pod-network.0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Workload="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:08.299222 containerd[1427]: 2025-08-12 23:57:08.293 [INFO][4255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" HandleID="k8s-pod-network.0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Workload="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:08.299222 containerd[1427]: 2025-08-12 23:57:08.295 [INFO][4255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:08.299222 containerd[1427]: 2025-08-12 23:57:08.297 [INFO][4238] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:57:08.299863 containerd[1427]: time="2025-08-12T23:57:08.299732173Z" level=info msg="TearDown network for sandbox \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\" successfully" Aug 12 23:57:08.299863 containerd[1427]: time="2025-08-12T23:57:08.299765973Z" level=info msg="StopPodSandbox for \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\" returns successfully" Aug 12 23:57:08.300475 containerd[1427]: time="2025-08-12T23:57:08.300441253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w79sn,Uid:c0354a12-b391-42fd-af3e-1ce2798bd729,Namespace:calico-system,Attempt:1,}" Aug 12 23:57:08.303945 systemd[1]: run-netns-cni\x2dd109d62f\x2d227a\x2d9765\x2d4ee0\x2dfca3b75b916e.mount: Deactivated successfully. Aug 12 23:57:08.330201 containerd[1427]: 2025-08-12 23:57:08.260 [INFO][4237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:57:08.330201 containerd[1427]: 2025-08-12 23:57:08.260 [INFO][4237] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" iface="eth0" netns="/var/run/netns/cni-889c540b-2c5c-81ae-2a02-a9337002bd45" Aug 12 23:57:08.330201 containerd[1427]: 2025-08-12 23:57:08.260 [INFO][4237] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" iface="eth0" netns="/var/run/netns/cni-889c540b-2c5c-81ae-2a02-a9337002bd45" Aug 12 23:57:08.330201 containerd[1427]: 2025-08-12 23:57:08.261 [INFO][4237] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" iface="eth0" netns="/var/run/netns/cni-889c540b-2c5c-81ae-2a02-a9337002bd45" Aug 12 23:57:08.330201 containerd[1427]: 2025-08-12 23:57:08.261 [INFO][4237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:57:08.330201 containerd[1427]: 2025-08-12 23:57:08.261 [INFO][4237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:57:08.330201 containerd[1427]: 2025-08-12 23:57:08.301 [INFO][4261] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" HandleID="k8s-pod-network.6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Workload="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:08.330201 containerd[1427]: 2025-08-12 23:57:08.301 [INFO][4261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:08.330201 containerd[1427]: 2025-08-12 23:57:08.301 [INFO][4261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:08.330201 containerd[1427]: 2025-08-12 23:57:08.316 [WARNING][4261] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" HandleID="k8s-pod-network.6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Workload="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:08.330201 containerd[1427]: 2025-08-12 23:57:08.316 [INFO][4261] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" HandleID="k8s-pod-network.6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Workload="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:08.330201 containerd[1427]: 2025-08-12 23:57:08.323 [INFO][4261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:08.330201 containerd[1427]: 2025-08-12 23:57:08.326 [INFO][4237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:57:08.330620 containerd[1427]: time="2025-08-12T23:57:08.330424734Z" level=info msg="TearDown network for sandbox \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\" successfully" Aug 12 23:57:08.330620 containerd[1427]: time="2025-08-12T23:57:08.330453894Z" level=info msg="StopPodSandbox for \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\" returns successfully" Aug 12 23:57:08.332383 systemd[1]: run-netns-cni\x2d889c540b\x2d2c5c\x2d81ae\x2d2a02\x2da9337002bd45.mount: Deactivated successfully. 
Aug 12 23:57:08.333105 containerd[1427]: time="2025-08-12T23:57:08.333063575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd595b89-dfkdk,Uid:d0d70758-a4a3-42e9-8e23-756a91ca0f83,Namespace:calico-apiserver,Attempt:1,}" Aug 12 23:57:08.482458 systemd-networkd[1365]: calibc86096231a: Link UP Aug 12 23:57:08.482641 systemd-networkd[1365]: calibc86096231a: Gained carrier Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.373 [INFO][4274] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--w79sn-eth0 csi-node-driver- calico-system c0354a12-b391-42fd-af3e-1ce2798bd729 935 0 2025-08-12 23:56:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-w79sn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibc86096231a [] [] }} ContainerID="e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" Namespace="calico-system" Pod="csi-node-driver-w79sn" WorkloadEndpoint="localhost-k8s-csi--node--driver--w79sn-" Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.374 [INFO][4274] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" Namespace="calico-system" Pod="csi-node-driver-w79sn" WorkloadEndpoint="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.424 [INFO][4303] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" HandleID="k8s-pod-network.e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" Workload="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.424 [INFO][4303] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" HandleID="k8s-pod-network.e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" Workload="localhost-k8s-csi--node--driver--w79sn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-w79sn", "timestamp":"2025-08-12 23:57:08.424352499 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.424 [INFO][4303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.424 [INFO][4303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
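The WorkloadEndpoint names throughout these entries follow a visible encoding: node, orchestrator, pod, and interface joined with "-", with literal dashes inside the pod name doubled ("csi-node-driver-w79sn" becomes "csi--node--driver--w79sn"). A sketch of that scheme as inferred from the log text above, not taken from Calico source:

```go
package main

import (
	"fmt"
	"strings"
)

// endpointName reproduces the naming pattern observed in the log:
// "<node>-k8s-<pod with '-' doubled>-<iface>". Inferred, not authoritative.
func endpointName(node, pod, iface string) string {
	escaped := strings.ReplaceAll(pod, "-", "--")
	return fmt.Sprintf("%s-k8s-%s-%s", node, escaped, iface)
}

func main() {
	fmt.Println(endpointName("localhost", "csi-node-driver-w79sn", "eth0"))
	// localhost-k8s-csi--node--driver--w79sn-eth0, matching the log
}
```

The doubling makes the name unambiguous to split back into its components, since single dashes only ever separate fields.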
Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.424 [INFO][4303] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.435 [INFO][4303] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" host="localhost" Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.440 [INFO][4303] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.448 [INFO][4303] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.451 [INFO][4303] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.457 [INFO][4303] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.457 [INFO][4303] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" host="localhost" Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.461 [INFO][4303] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6 Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.465 [INFO][4303] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" host="localhost" Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.474 [INFO][4303] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" host="localhost" Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.474 [INFO][4303] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" host="localhost" Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.474 [INFO][4303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
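The IPAM walk above is Calico's block-based allocator: confirm the host's affinity for block 192.168.88.128/26, load the block, claim the next free address (.130 here; .129 went to the whisker pod earlier), then write the block back to claim the IP. A toy sketch of the assignment step with net/netip, under stated assumptions (linear scan, network address skipped, no datastore, no affinity handling):

```go
package main

import (
	"fmt"
	"net/netip"
)

// Block is a toy stand-in for a Calico IPAM block: a /26 plus an
// allocation set. Real blocks live in the datastore and are updated
// under the host-wide lock shown in the log.
type Block struct {
	CIDR netip.Prefix
	Used map[netip.Addr]bool
}

// Assign claims the next free address, skipping the network address.
func (b *Block) Assign() (netip.Addr, bool) {
	for a := b.CIDR.Addr().Next(); b.CIDR.Contains(a); a = a.Next() {
		if !b.Used[a] {
			b.Used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &Block{
		CIDR: netip.MustParsePrefix("192.168.88.128/26"),
		Used: map[netip.Addr]bool{},
	}
	for i := 0; i < 4; i++ {
		a, _ := b.Assign()
		fmt.Println(a) // .129, .130, .131, .132, the order seen in this log
	}
}
```

The four pods in this section do land on .129 through .132 in exactly that order, one block write per claim.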
Aug 12 23:57:08.502210 containerd[1427]: 2025-08-12 23:57:08.475 [INFO][4303] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" HandleID="k8s-pod-network.e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" Workload="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:08.503033 containerd[1427]: 2025-08-12 23:57:08.480 [INFO][4274] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" Namespace="calico-system" Pod="csi-node-driver-w79sn" WorkloadEndpoint="localhost-k8s-csi--node--driver--w79sn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w79sn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0354a12-b391-42fd-af3e-1ce2798bd729", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-w79sn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc86096231a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:08.503033 containerd[1427]: 2025-08-12 23:57:08.480 [INFO][4274] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" Namespace="calico-system" Pod="csi-node-driver-w79sn" WorkloadEndpoint="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:08.503033 containerd[1427]: 2025-08-12 23:57:08.480 [INFO][4274] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc86096231a ContainerID="e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" Namespace="calico-system" Pod="csi-node-driver-w79sn" WorkloadEndpoint="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:08.503033 containerd[1427]: 2025-08-12 23:57:08.481 [INFO][4274] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" Namespace="calico-system" Pod="csi-node-driver-w79sn" WorkloadEndpoint="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:08.503033 containerd[1427]: 2025-08-12 23:57:08.482 [INFO][4274] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" Namespace="calico-system" Pod="csi-node-driver-w79sn" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--w79sn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w79sn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0354a12-b391-42fd-af3e-1ce2798bd729", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6", Pod:"csi-node-driver-w79sn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc86096231a", MAC:"ea:b2:5a:1b:23:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:08.503033 containerd[1427]: 2025-08-12 23:57:08.495 [INFO][4274] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6" Namespace="calico-system" Pod="csi-node-driver-w79sn" WorkloadEndpoint="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:08.520836 containerd[1427]: time="2025-08-12T23:57:08.520660584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:57:08.520836 containerd[1427]: time="2025-08-12T23:57:08.520749704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:57:08.520836 containerd[1427]: time="2025-08-12T23:57:08.520804584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:08.521102 containerd[1427]: time="2025-08-12T23:57:08.520921904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:08.551195 systemd[1]: Started cri-containerd-e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6.scope - libcontainer container e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6. 
Aug 12 23:57:08.568372 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:57:08.581019 systemd-networkd[1365]: cali64f83a8d707: Link UP Aug 12 23:57:08.582161 systemd-networkd[1365]: cali64f83a8d707: Gained carrier Aug 12 23:57:08.590273 containerd[1427]: time="2025-08-12T23:57:08.589844787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w79sn,Uid:c0354a12-b391-42fd-af3e-1ce2798bd729,Namespace:calico-system,Attempt:1,} returns sandbox id \"e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6\"" Aug 12 23:57:08.595431 containerd[1427]: time="2025-08-12T23:57:08.595213748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.399 [INFO][4289] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0 calico-apiserver-7cfd595b89- calico-apiserver d0d70758-a4a3-42e9-8e23-756a91ca0f83 936 0 2025-08-12 23:56:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cfd595b89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cfd595b89-dfkdk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali64f83a8d707 [] [] }} ContainerID="253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-dfkdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-" Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.399 [INFO][4289] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-dfkdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.433 [INFO][4310] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" HandleID="k8s-pod-network.253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" Workload="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.433 [INFO][4310] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" HandleID="k8s-pod-network.253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" Workload="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cfd595b89-dfkdk", "timestamp":"2025-08-12 23:57:08.43351522 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.433 [INFO][4310] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
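The systemd-resolved complaint recurs after every sandbox start in this log, apparently triggered each time a new cali* interface appears; it is noise here. More useful are the RunPodSandbox return lines, which pair each pod (name, UID, namespace) with its sandbox ID. When auditing a log like this one, a small parser can build that mapping; the regexp is mine, keyed to the message shape quoted above, not a containerd API:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// sandboxRe keys on the "RunPodSandbox for &PodSandboxMetadata{...}
// returns sandbox id" messages above. Best-effort pattern.
var sandboxRe = regexp.MustCompile(
	`RunPodSandbox for &PodSandboxMetadata{Name:([^,]+),Uid:([^,]+),Namespace:([^,]+),.*returns sandbox id \\"([0-9a-f]+)\\"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these log lines are long
	for sc.Scan() {
		if m := sandboxRe.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("%s/%s (uid %s) -> sandbox %s\n", m[3], m[1], m[2], m[4])
		}
	}
}
```

Fed this section, it would emit one line each for the whisker, csi-node-driver, calico-apiserver, and calico-kube-controllers sandboxes.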
Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.474 [INFO][4310] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.475 [INFO][4310] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.535 [INFO][4310] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" host="localhost" Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.544 [INFO][4310] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.549 [INFO][4310] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.554 [INFO][4310] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.557 [INFO][4310] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.557 [INFO][4310] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" host="localhost" Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.559 [INFO][4310] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771 Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.563 [INFO][4310] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" host="localhost" Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.572 [INFO][4310] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" host="localhost" Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.572 [INFO][4310] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" host="localhost" Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.572 [INFO][4310] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
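Note the timing in this walk: handler [4310] logged "About to acquire host-wide IPAM lock" at 08.433 but only acquired it at 08.474, the instant handler [4303] released it after assigning the csi-node-driver address. Address assignments on one host are fully serialized by that lock. A minimal sketch of the pattern, with sync.Mutex standing in for whatever Calico actually uses:

```go
package main

import (
	"fmt"
	"sync"
)

// hostIPAM serializes all assignments on a node, mirroring the
// "About to acquire/Acquired/Released host-wide IPAM lock" entries.
type hostIPAM struct {
	mu   sync.Mutex
	next int
}

func (h *hostIPAM) assign(pod string) string {
	h.mu.Lock() // a concurrent caller blocks here, as in the log timing
	defer h.mu.Unlock()
	h.next++
	return fmt.Sprintf("192.168.88.%d/26 -> %s", 128+h.next, pod)
}

func main() {
	h := &hostIPAM{}
	var wg sync.WaitGroup
	for _, pod := range []string{"csi-node-driver-w79sn", "calico-apiserver-7cfd595b89-dfkdk"} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); fmt.Println(h.assign(p)) }(pod)
	}
	wg.Wait()
}
```

The serialization is why the two pods created in the same instant still receive strictly consecutive addresses (.130 then .131).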
Aug 12 23:57:08.607669 containerd[1427]: 2025-08-12 23:57:08.572 [INFO][4310] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" HandleID="k8s-pod-network.253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" Workload="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:08.608314 containerd[1427]: 2025-08-12 23:57:08.576 [INFO][4289] cni-plugin/k8s.go 418: Populated endpoint ContainerID="253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-dfkdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0", GenerateName:"calico-apiserver-7cfd595b89-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0d70758-a4a3-42e9-8e23-756a91ca0f83", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd595b89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cfd595b89-dfkdk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64f83a8d707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:08.608314 containerd[1427]: 2025-08-12 23:57:08.576 [INFO][4289] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-dfkdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:08.608314 containerd[1427]: 2025-08-12 23:57:08.576 [INFO][4289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali64f83a8d707 ContainerID="253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-dfkdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:08.608314 containerd[1427]: 2025-08-12 23:57:08.583 [INFO][4289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-dfkdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:08.608314 containerd[1427]: 2025-08-12 23:57:08.590 [INFO][4289] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-dfkdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0", GenerateName:"calico-apiserver-7cfd595b89-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0d70758-a4a3-42e9-8e23-756a91ca0f83", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd595b89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771", Pod:"calico-apiserver-7cfd595b89-dfkdk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64f83a8d707", MAC:"96:bd:8a:30:0b:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:08.608314 containerd[1427]: 2025-08-12 23:57:08.603 [INFO][4289] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-dfkdk" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:08.631767 containerd[1427]: time="2025-08-12T23:57:08.630840189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:57:08.631767 containerd[1427]: time="2025-08-12T23:57:08.631548909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:57:08.631767 containerd[1427]: time="2025-08-12T23:57:08.631567629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:08.632035 containerd[1427]: time="2025-08-12T23:57:08.631710029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:08.653195 systemd[1]: Started cri-containerd-253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771.scope - libcontainer container 253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771. 
Aug 12 23:57:08.668005 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:57:08.688094 containerd[1427]: time="2025-08-12T23:57:08.688029432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd595b89-dfkdk,Uid:d0d70758-a4a3-42e9-8e23-756a91ca0f83,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771\"" Aug 12 23:57:09.176826 containerd[1427]: time="2025-08-12T23:57:09.176772896Z" level=info msg="StopPodSandbox for \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\"" Aug 12 23:57:09.278933 containerd[1427]: 2025-08-12 23:57:09.232 [INFO][4432] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:57:09.278933 containerd[1427]: 2025-08-12 23:57:09.232 [INFO][4432] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" iface="eth0" netns="/var/run/netns/cni-662cf24e-131a-d162-18da-dd80af56750c" Aug 12 23:57:09.278933 containerd[1427]: 2025-08-12 23:57:09.232 [INFO][4432] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" iface="eth0" netns="/var/run/netns/cni-662cf24e-131a-d162-18da-dd80af56750c" Aug 12 23:57:09.278933 containerd[1427]: 2025-08-12 23:57:09.232 [INFO][4432] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" iface="eth0" netns="/var/run/netns/cni-662cf24e-131a-d162-18da-dd80af56750c" Aug 12 23:57:09.278933 containerd[1427]: 2025-08-12 23:57:09.232 [INFO][4432] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:57:09.278933 containerd[1427]: 2025-08-12 23:57:09.232 [INFO][4432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:57:09.278933 containerd[1427]: 2025-08-12 23:57:09.259 [INFO][4440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" HandleID="k8s-pod-network.406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Workload="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:09.278933 containerd[1427]: 2025-08-12 23:57:09.259 [INFO][4440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:09.278933 containerd[1427]: 2025-08-12 23:57:09.259 [INFO][4440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:09.278933 containerd[1427]: 2025-08-12 23:57:09.271 [WARNING][4440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" HandleID="k8s-pod-network.406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Workload="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:09.278933 containerd[1427]: 2025-08-12 23:57:09.271 [INFO][4440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" HandleID="k8s-pod-network.406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Workload="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:09.278933 containerd[1427]: 2025-08-12 23:57:09.273 [INFO][4440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:09.278933 containerd[1427]: 2025-08-12 23:57:09.276 [INFO][4432] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:57:09.279746 containerd[1427]: time="2025-08-12T23:57:09.279100460Z" level=info msg="TearDown network for sandbox \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\" successfully" Aug 12 23:57:09.279746 containerd[1427]: time="2025-08-12T23:57:09.279129140Z" level=info msg="StopPodSandbox for \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\" returns successfully" Aug 12 23:57:09.280146 containerd[1427]: time="2025-08-12T23:57:09.279840540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597d6cb8d8-g8nzb,Uid:ad64c6a3-30ab-4a17-9b28-e049d25c1e5b,Namespace:calico-system,Attempt:1,}" Aug 12 23:57:09.311075 systemd[1]: run-netns-cni\x2d662cf24e\x2d131a\x2dd162\x2d18da\x2ddd80af56750c.mount: Deactivated successfully. 
Aug 12 23:57:09.434266 systemd-networkd[1365]: cali80f4d5d483f: Link UP Aug 12 23:57:09.434611 systemd-networkd[1365]: cali80f4d5d483f: Gained carrier Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.344 [INFO][4448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0 calico-kube-controllers-597d6cb8d8- calico-system ad64c6a3-30ab-4a17-9b28-e049d25c1e5b 947 0 2025-08-12 23:56:47 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:597d6cb8d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-597d6cb8d8-g8nzb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali80f4d5d483f [] [] }} ContainerID="7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" Namespace="calico-system" Pod="calico-kube-controllers-597d6cb8d8-g8nzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-" Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.344 [INFO][4448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" Namespace="calico-system" Pod="calico-kube-controllers-597d6cb8d8-g8nzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.382 [INFO][4462] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" HandleID="k8s-pod-network.7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" Workload="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.382 [INFO][4462] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" HandleID="k8s-pod-network.7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" Workload="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137b80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-597d6cb8d8-g8nzb", "timestamp":"2025-08-12 23:57:09.382174945 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.382 [INFO][4462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.382 [INFO][4462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.382 [INFO][4462] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.396 [INFO][4462] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" host="localhost" Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.404 [INFO][4462] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.409 [INFO][4462] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.412 [INFO][4462] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.417 [INFO][4462] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.417 [INFO][4462] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" host="localhost" Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.418 [INFO][4462] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16 Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.423 [INFO][4462] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" host="localhost" Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.429 [INFO][4462] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" host="localhost" Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.429 [INFO][4462] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" host="localhost" Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.429 [INFO][4462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:57:09.450816 containerd[1427]: 2025-08-12 23:57:09.429 [INFO][4462] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" HandleID="k8s-pod-network.7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" Workload="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:09.452140 containerd[1427]: 2025-08-12 23:57:09.432 [INFO][4448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" Namespace="calico-system" Pod="calico-kube-controllers-597d6cb8d8-g8nzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0", GenerateName:"calico-kube-controllers-597d6cb8d8-", Namespace:"calico-system", SelfLink:"", UID:"ad64c6a3-30ab-4a17-9b28-e049d25c1e5b", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597d6cb8d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-597d6cb8d8-g8nzb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80f4d5d483f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:09.452140 containerd[1427]: 2025-08-12 23:57:09.432 [INFO][4448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" Namespace="calico-system" Pod="calico-kube-controllers-597d6cb8d8-g8nzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:09.452140 containerd[1427]: 2025-08-12 23:57:09.432 [INFO][4448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80f4d5d483f ContainerID="7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" Namespace="calico-system" Pod="calico-kube-controllers-597d6cb8d8-g8nzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:09.452140 containerd[1427]: 2025-08-12 23:57:09.434 [INFO][4448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" Namespace="calico-system" Pod="calico-kube-controllers-597d6cb8d8-g8nzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:09.452140 containerd[1427]: 2025-08-12 23:57:09.434 [INFO][4448] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" Namespace="calico-system" Pod="calico-kube-controllers-597d6cb8d8-g8nzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0", GenerateName:"calico-kube-controllers-597d6cb8d8-", Namespace:"calico-system", SelfLink:"", UID:"ad64c6a3-30ab-4a17-9b28-e049d25c1e5b", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597d6cb8d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16", Pod:"calico-kube-controllers-597d6cb8d8-g8nzb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80f4d5d483f", MAC:"c2:b0:07:6c:cd:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:09.452140 containerd[1427]: 2025-08-12 23:57:09.448 [INFO][4448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16" Namespace="calico-system" Pod="calico-kube-controllers-597d6cb8d8-g8nzb" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:09.487091 containerd[1427]: time="2025-08-12T23:57:09.486826270Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:57:09.487091 containerd[1427]: time="2025-08-12T23:57:09.486895950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:57:09.487091 containerd[1427]: time="2025-08-12T23:57:09.486907630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:09.487383 containerd[1427]: time="2025-08-12T23:57:09.487084950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:09.501766 systemd[1]: run-containerd-runc-k8s.io-7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16-runc.oeB3Xe.mount: Deactivated successfully. Aug 12 23:57:09.515319 systemd[1]: Started cri-containerd-7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16.scope - libcontainer container 7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16. 
Aug 12 23:57:09.528938 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:57:09.552655 containerd[1427]: time="2025-08-12T23:57:09.552612633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-597d6cb8d8-g8nzb,Uid:ad64c6a3-30ab-4a17-9b28-e049d25c1e5b,Namespace:calico-system,Attempt:1,} returns sandbox id \"7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16\"" Aug 12 23:57:09.697156 systemd-networkd[1365]: cali64f83a8d707: Gained IPv6LL Aug 12 23:57:09.889158 systemd-networkd[1365]: calibc86096231a: Gained IPv6LL Aug 12 23:57:10.152437 containerd[1427]: time="2025-08-12T23:57:10.152384260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:10.154601 containerd[1427]: time="2025-08-12T23:57:10.154564580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 12 23:57:10.155561 containerd[1427]: time="2025-08-12T23:57:10.155529020Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:10.158044 containerd[1427]: time="2025-08-12T23:57:10.158007500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:10.159379 containerd[1427]: time="2025-08-12T23:57:10.159336861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.563820233s" Aug 12 23:57:10.159422 containerd[1427]: time="2025-08-12T23:57:10.159378901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 12 23:57:10.160426 containerd[1427]: time="2025-08-12T23:57:10.160230101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 12 23:57:10.161543 containerd[1427]: time="2025-08-12T23:57:10.161487261Z" level=info msg="CreateContainer within sandbox \"e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 12 23:57:10.177214 containerd[1427]: time="2025-08-12T23:57:10.177154381Z" level=info msg="StopPodSandbox for \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\"" Aug 12 23:57:10.186409 containerd[1427]: time="2025-08-12T23:57:10.186352542Z" level=info msg="CreateContainer within sandbox \"e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9d18b554467ceebb65dc7e571421396b81c932e53d096ee15e4c05ba74fe78c5\"" Aug 12 23:57:10.187174 containerd[1427]: time="2025-08-12T23:57:10.186733062Z" level=info msg="StopPodSandbox for \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\"" Aug 12 23:57:10.191225 containerd[1427]: time="2025-08-12T23:57:10.191174702Z" level=info msg="StartContainer for 
\"9d18b554467ceebb65dc7e571421396b81c932e53d096ee15e4c05ba74fe78c5\"" Aug 12 23:57:10.228212 systemd[1]: Started cri-containerd-9d18b554467ceebb65dc7e571421396b81c932e53d096ee15e4c05ba74fe78c5.scope - libcontainer container 9d18b554467ceebb65dc7e571421396b81c932e53d096ee15e4c05ba74fe78c5. Aug 12 23:57:10.280399 containerd[1427]: time="2025-08-12T23:57:10.280319986Z" level=info msg="StartContainer for \"9d18b554467ceebb65dc7e571421396b81c932e53d096ee15e4c05ba74fe78c5\" returns successfully" Aug 12 23:57:10.348098 containerd[1427]: 2025-08-12 23:57:10.280 [INFO][4545] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:57:10.348098 containerd[1427]: 2025-08-12 23:57:10.280 [INFO][4545] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" iface="eth0" netns="/var/run/netns/cni-4287bec6-b7bb-693d-26e5-63522aee903d" Aug 12 23:57:10.348098 containerd[1427]: 2025-08-12 23:57:10.281 [INFO][4545] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" iface="eth0" netns="/var/run/netns/cni-4287bec6-b7bb-693d-26e5-63522aee903d" Aug 12 23:57:10.348098 containerd[1427]: 2025-08-12 23:57:10.281 [INFO][4545] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" iface="eth0" netns="/var/run/netns/cni-4287bec6-b7bb-693d-26e5-63522aee903d" Aug 12 23:57:10.348098 containerd[1427]: 2025-08-12 23:57:10.281 [INFO][4545] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:57:10.348098 containerd[1427]: 2025-08-12 23:57:10.281 [INFO][4545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:57:10.348098 containerd[1427]: 2025-08-12 23:57:10.311 [INFO][4593] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" HandleID="k8s-pod-network.704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Workload="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:10.348098 containerd[1427]: 2025-08-12 23:57:10.312 [INFO][4593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:10.348098 containerd[1427]: 2025-08-12 23:57:10.312 [INFO][4593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:10.348098 containerd[1427]: 2025-08-12 23:57:10.327 [WARNING][4593] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" HandleID="k8s-pod-network.704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Workload="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:10.348098 containerd[1427]: 2025-08-12 23:57:10.327 [INFO][4593] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" HandleID="k8s-pod-network.704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Workload="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:10.348098 containerd[1427]: 2025-08-12 23:57:10.345 [INFO][4593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:10.348098 containerd[1427]: 2025-08-12 23:57:10.346 [INFO][4545] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:57:10.350410 containerd[1427]: time="2025-08-12T23:57:10.349051029Z" level=info msg="TearDown network for sandbox \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\" successfully" Aug 12 23:57:10.350410 containerd[1427]: time="2025-08-12T23:57:10.349078869Z" level=info msg="StopPodSandbox for \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\" returns successfully" Aug 12 23:57:10.350410 containerd[1427]: time="2025-08-12T23:57:10.349734349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd595b89-s4blw,Uid:2291631d-1fc3-422f-a756-efdd85b1d503,Namespace:calico-apiserver,Attempt:1,}" Aug 12 23:57:10.350991 systemd[1]: run-netns-cni\x2d4287bec6\x2db7bb\x2d693d\x2d26e5\x2d63522aee903d.mount: Deactivated successfully. Aug 12 23:57:10.360944 containerd[1427]: 2025-08-12 23:57:10.283 [INFO][4546] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:57:10.360944 containerd[1427]: 2025-08-12 23:57:10.283 [INFO][4546] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" iface="eth0" netns="/var/run/netns/cni-de1f9927-a000-bbc9-9f63-b031e6633438" Aug 12 23:57:10.360944 containerd[1427]: 2025-08-12 23:57:10.283 [INFO][4546] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" iface="eth0" netns="/var/run/netns/cni-de1f9927-a000-bbc9-9f63-b031e6633438" Aug 12 23:57:10.360944 containerd[1427]: 2025-08-12 23:57:10.291 [INFO][4546] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" iface="eth0" netns="/var/run/netns/cni-de1f9927-a000-bbc9-9f63-b031e6633438" Aug 12 23:57:10.360944 containerd[1427]: 2025-08-12 23:57:10.291 [INFO][4546] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:57:10.360944 containerd[1427]: 2025-08-12 23:57:10.291 [INFO][4546] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:57:10.360944 containerd[1427]: 2025-08-12 23:57:10.323 [INFO][4599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" HandleID="k8s-pod-network.7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Workload="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:10.360944 containerd[1427]: 2025-08-12 23:57:10.323 [INFO][4599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:10.360944 containerd[1427]: 2025-08-12 23:57:10.345 [INFO][4599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:10.360944 containerd[1427]: 2025-08-12 23:57:10.355 [WARNING][4599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" HandleID="k8s-pod-network.7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Workload="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:10.360944 containerd[1427]: 2025-08-12 23:57:10.355 [INFO][4599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" HandleID="k8s-pod-network.7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Workload="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:10.360944 containerd[1427]: 2025-08-12 23:57:10.356 [INFO][4599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:10.360944 containerd[1427]: 2025-08-12 23:57:10.358 [INFO][4546] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:57:10.361743 containerd[1427]: time="2025-08-12T23:57:10.361609669Z" level=info msg="TearDown network for sandbox \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\" successfully" Aug 12 23:57:10.361743 containerd[1427]: time="2025-08-12T23:57:10.361645029Z" level=info msg="StopPodSandbox for \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\" returns successfully" Aug 12 23:57:10.362254 kubelet[2453]: E0812 23:57:10.362220 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:57:10.362642 containerd[1427]: time="2025-08-12T23:57:10.362576029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zvjr6,Uid:c162d718-8d24-407d-9e4c-2cf3d4f42ab4,Namespace:kube-system,Attempt:1,}" Aug 12 23:57:10.367062 systemd[1]: run-netns-cni\x2dde1f9927\x2da000\x2dbbc9\x2d9f63\x2db031e6633438.mount: Deactivated successfully. 
Aug 12 23:57:10.542800 systemd-networkd[1365]: cali844968967e9: Link UP Aug 12 23:57:10.544261 systemd-networkd[1365]: cali844968967e9: Gained carrier Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.423 [INFO][4610] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0 calico-apiserver-7cfd595b89- calico-apiserver 2291631d-1fc3-422f-a756-efdd85b1d503 962 0 2025-08-12 23:56:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cfd595b89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cfd595b89-s4blw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali844968967e9 [] [] }} ContainerID="246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-s4blw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-" Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.423 [INFO][4610] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-s4blw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.462 [INFO][4638] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" HandleID="k8s-pod-network.246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" Workload="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.462 [INFO][4638] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" HandleID="k8s-pod-network.246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" Workload="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab4d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cfd595b89-s4blw", "timestamp":"2025-08-12 23:57:10.462650714 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.462 [INFO][4638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.463 [INFO][4638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.463 [INFO][4638] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.472 [INFO][4638] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" host="localhost" Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.477 [INFO][4638] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.498 [INFO][4638] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.500 [INFO][4638] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.502 [INFO][4638] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.502 [INFO][4638] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" host="localhost" Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.504 [INFO][4638] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.514 [INFO][4638] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" host="localhost" Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.532 [INFO][4638] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" host="localhost" Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.532 [INFO][4638] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" host="localhost" Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.532 [INFO][4638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:57:10.561204 containerd[1427]: 2025-08-12 23:57:10.532 [INFO][4638] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" HandleID="k8s-pod-network.246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" Workload="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:10.561864 containerd[1427]: 2025-08-12 23:57:10.538 [INFO][4610] cni-plugin/k8s.go 418: Populated endpoint ContainerID="246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-s4blw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0", GenerateName:"calico-apiserver-7cfd595b89-", Namespace:"calico-apiserver", SelfLink:"", UID:"2291631d-1fc3-422f-a756-efdd85b1d503", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd595b89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cfd595b89-s4blw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali844968967e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:10.561864 containerd[1427]: 2025-08-12 23:57:10.538 [INFO][4610] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-s4blw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:10.561864 containerd[1427]: 2025-08-12 23:57:10.538 [INFO][4610] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali844968967e9 ContainerID="246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-s4blw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:10.561864 containerd[1427]: 2025-08-12 23:57:10.548 [INFO][4610] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-s4blw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:10.561864 containerd[1427]: 2025-08-12 23:57:10.548 [INFO][4610] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-s4blw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0", GenerateName:"calico-apiserver-7cfd595b89-", Namespace:"calico-apiserver", SelfLink:"", UID:"2291631d-1fc3-422f-a756-efdd85b1d503", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd595b89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c", Pod:"calico-apiserver-7cfd595b89-s4blw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali844968967e9", MAC:"be:5b:ca:8e:a7:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:10.561864 containerd[1427]: 2025-08-12 23:57:10.558 [INFO][4610] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c" Namespace="calico-apiserver" Pod="calico-apiserver-7cfd595b89-s4blw" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:10.580019 containerd[1427]: time="2025-08-12T23:57:10.579854559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:57:10.580019 containerd[1427]: time="2025-08-12T23:57:10.579927919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:57:10.580019 containerd[1427]: time="2025-08-12T23:57:10.579943959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:10.581185 containerd[1427]: time="2025-08-12T23:57:10.580090159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:10.609479 systemd[1]: Started cri-containerd-246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c.scope - libcontainer container 246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c. 
Aug 12 23:57:10.610904 systemd-networkd[1365]: cali5ceb9fc4dfa: Link UP Aug 12 23:57:10.611114 systemd-networkd[1365]: cali5ceb9fc4dfa: Gained carrier Aug 12 23:57:10.623884 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.421 [INFO][4621] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0 coredns-7c65d6cfc9- kube-system c162d718-8d24-407d-9e4c-2cf3d4f42ab4 963 0 2025-08-12 23:56:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-zvjr6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ceb9fc4dfa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvjr6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvjr6-" Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.421 [INFO][4621] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvjr6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.465 [INFO][4643] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" HandleID="k8s-pod-network.8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" Workload="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.465 [INFO][4643] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" HandleID="k8s-pod-network.8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" Workload="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000502ad0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-zvjr6", "timestamp":"2025-08-12 23:57:10.465344714 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.466 [INFO][4643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.532 [INFO][4643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.532 [INFO][4643] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.573 [INFO][4643] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" host="localhost" Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.581 [INFO][4643] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.585 [INFO][4643] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.588 [INFO][4643] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.590 [INFO][4643] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.590 [INFO][4643] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" host="localhost" Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.592 [INFO][4643] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448 Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.597 [INFO][4643] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" host="localhost" Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.605 [INFO][4643] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" host="localhost" Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.605 [INFO][4643] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" host="localhost" Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.605 [INFO][4643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:57:10.629293 containerd[1427]: 2025-08-12 23:57:10.605 [INFO][4643] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" HandleID="k8s-pod-network.8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" Workload="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:10.630322 containerd[1427]: 2025-08-12 23:57:10.607 [INFO][4621] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvjr6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c162d718-8d24-407d-9e4c-2cf3d4f42ab4", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-zvjr6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ceb9fc4dfa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:10.630322 containerd[1427]: 2025-08-12 23:57:10.607 [INFO][4621] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvjr6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:10.630322 containerd[1427]: 2025-08-12 23:57:10.607 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ceb9fc4dfa ContainerID="8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvjr6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:10.630322 containerd[1427]: 2025-08-12 23:57:10.610 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvjr6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:10.630322 
containerd[1427]: 2025-08-12 23:57:10.610 [INFO][4621] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvjr6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c162d718-8d24-407d-9e4c-2cf3d4f42ab4", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448", Pod:"coredns-7c65d6cfc9-zvjr6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ceb9fc4dfa", MAC:"ca:b0:a6:5b:bf:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:10.630322 containerd[1427]: 2025-08-12 23:57:10.625 [INFO][4621] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448" Namespace="kube-system" Pod="coredns-7c65d6cfc9-zvjr6" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:10.646904 containerd[1427]: time="2025-08-12T23:57:10.646863642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cfd595b89-s4blw,Uid:2291631d-1fc3-422f-a756-efdd85b1d503,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c\"" Aug 12 23:57:10.658401 containerd[1427]: time="2025-08-12T23:57:10.653069082Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:57:10.658401 containerd[1427]: time="2025-08-12T23:57:10.653124082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:57:10.658401 containerd[1427]: time="2025-08-12T23:57:10.653139202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:10.658401 containerd[1427]: time="2025-08-12T23:57:10.653214522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:10.672141 systemd[1]: Started cri-containerd-8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448.scope - libcontainer container 8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448. Aug 12 23:57:10.701548 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:57:10.743158 containerd[1427]: time="2025-08-12T23:57:10.743076926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zvjr6,Uid:c162d718-8d24-407d-9e4c-2cf3d4f42ab4,Namespace:kube-system,Attempt:1,} returns sandbox id \"8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448\"" Aug 12 23:57:10.743923 kubelet[2453]: E0812 23:57:10.743902 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:57:10.747937 containerd[1427]: time="2025-08-12T23:57:10.747897886Z" level=info msg="CreateContainer within sandbox \"8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:57:10.824350 containerd[1427]: time="2025-08-12T23:57:10.824113449Z" level=info msg="CreateContainer within sandbox \"8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7676905be40b31fb4ac13f01045426d56a59233b515b94072e9dae79226ee0ff\"" Aug 12 23:57:10.824765 containerd[1427]: time="2025-08-12T23:57:10.824703609Z" level=info msg="StartContainer for \"7676905be40b31fb4ac13f01045426d56a59233b515b94072e9dae79226ee0ff\"" Aug 12 23:57:10.850192 systemd[1]: Started cri-containerd-7676905be40b31fb4ac13f01045426d56a59233b515b94072e9dae79226ee0ff.scope - libcontainer container 7676905be40b31fb4ac13f01045426d56a59233b515b94072e9dae79226ee0ff. Aug 12 23:57:10.882105 containerd[1427]: time="2025-08-12T23:57:10.882058692Z" level=info msg="StartContainer for \"7676905be40b31fb4ac13f01045426d56a59233b515b94072e9dae79226ee0ff\" returns successfully" Aug 12 23:57:10.914459 systemd[1]: Started sshd@7-10.0.0.26:22-10.0.0.1:45714.service - OpenSSH per-connection server daemon (10.0.0.1:45714). Aug 12 23:57:10.979318 sshd[4794]: Accepted publickey for core from 10.0.0.1 port 45714 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:57:10.981034 sshd[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:10.985849 systemd-logind[1412]: New session 8 of user core. Aug 12 23:57:10.992163 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 12 23:57:11.177434 containerd[1427]: time="2025-08-12T23:57:11.176806224Z" level=info msg="StopPodSandbox for \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\"" Aug 12 23:57:11.283284 containerd[1427]: 2025-08-12 23:57:11.238 [INFO][4822] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:57:11.283284 containerd[1427]: 2025-08-12 23:57:11.238 [INFO][4822] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" iface="eth0" netns="/var/run/netns/cni-2d4b024c-2a46-6e6e-45c5-fa73889dcca4" Aug 12 23:57:11.283284 containerd[1427]: 2025-08-12 23:57:11.238 [INFO][4822] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" iface="eth0" netns="/var/run/netns/cni-2d4b024c-2a46-6e6e-45c5-fa73889dcca4" Aug 12 23:57:11.283284 containerd[1427]: 2025-08-12 23:57:11.238 [INFO][4822] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" iface="eth0" netns="/var/run/netns/cni-2d4b024c-2a46-6e6e-45c5-fa73889dcca4" Aug 12 23:57:11.283284 containerd[1427]: 2025-08-12 23:57:11.238 [INFO][4822] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:57:11.283284 containerd[1427]: 2025-08-12 23:57:11.238 [INFO][4822] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:57:11.283284 containerd[1427]: 2025-08-12 23:57:11.264 [INFO][4831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" HandleID="k8s-pod-network.c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Workload="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:11.283284 containerd[1427]: 2025-08-12 23:57:11.264 [INFO][4831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:11.283284 containerd[1427]: 2025-08-12 23:57:11.264 [INFO][4831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:11.283284 containerd[1427]: 2025-08-12 23:57:11.274 [WARNING][4831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" HandleID="k8s-pod-network.c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Workload="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:11.283284 containerd[1427]: 2025-08-12 23:57:11.274 [INFO][4831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" HandleID="k8s-pod-network.c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Workload="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:11.283284 containerd[1427]: 2025-08-12 23:57:11.277 [INFO][4831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:11.283284 containerd[1427]: 2025-08-12 23:57:11.279 [INFO][4822] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:57:11.284364 containerd[1427]: time="2025-08-12T23:57:11.283582228Z" level=info msg="TearDown network for sandbox \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\" successfully" Aug 12 23:57:11.284364 containerd[1427]: time="2025-08-12T23:57:11.283612908Z" level=info msg="StopPodSandbox for \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\" returns successfully" Aug 12 23:57:11.284425 kubelet[2453]: E0812 23:57:11.283987 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:57:11.285655 containerd[1427]: time="2025-08-12T23:57:11.285525829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-777z4,Uid:b91aefd7-fa4c-469b-969b-9459b2f96cc9,Namespace:kube-system,Attempt:1,}" Aug 12 23:57:11.310777 systemd[1]: run-netns-cni\x2d2d4b024c\x2d2a46\x2d6e6e\x2d45c5\x2dfa73889dcca4.mount: Deactivated successfully. Aug 12 23:57:11.321373 sshd[4794]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:11.328772 systemd[1]: sshd@7-10.0.0.26:22-10.0.0.1:45714.service: Deactivated successfully. Aug 12 23:57:11.332073 systemd[1]: session-8.scope: Deactivated successfully. Aug 12 23:57:11.334837 systemd-logind[1412]: Session 8 logged out. Waiting for processes to exit. Aug 12 23:57:11.336753 systemd-logind[1412]: Removed session 8. Aug 12 23:57:11.378788 kubelet[2453]: E0812 23:57:11.378726 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:57:11.393314 kubelet[2453]: I0812 23:57:11.393234 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zvjr6" podStartSLOduration=37.393180233 podStartE2EDuration="37.393180233s" podCreationTimestamp="2025-08-12 23:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:57:11.393018833 +0000 UTC m=+43.321931071" watchObservedRunningTime="2025-08-12 23:57:11.393180233 +0000 UTC m=+43.322092471" Aug 12 23:57:11.429516 systemd-networkd[1365]: cali80f4d5d483f: Gained IPv6LL Aug 12 23:57:11.622136 systemd-networkd[1365]: cali7453d490b7b: Link UP Aug 12 23:57:11.622575 systemd-networkd[1365]: cali7453d490b7b: Gained carrier Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.355 [INFO][4844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--777z4-eth0 coredns-7c65d6cfc9- kube-system b91aefd7-fa4c-469b-969b-9459b2f96cc9 1011 0 2025-08-12 23:56:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-777z4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7453d490b7b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-777z4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--777z4-" Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.355 [INFO][4844] cni-plugin/k8s.go 
74: Extracted identifiers for CmdAddK8s ContainerID="c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-777z4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.392 [INFO][4858] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" HandleID="k8s-pod-network.c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" Workload="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.393 [INFO][4858] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" HandleID="k8s-pod-network.c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" Workload="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a2e70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-777z4", "timestamp":"2025-08-12 23:57:11.392968913 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.393 [INFO][4858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.393 [INFO][4858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.393 [INFO][4858] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.408 [INFO][4858] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" host="localhost" Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.466 [INFO][4858] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.479 [INFO][4858] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.485 [INFO][4858] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.490 [INFO][4858] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.490 [INFO][4858] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" host="localhost" Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.494 [INFO][4858] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5 Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.510 [INFO][4858] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" host="localhost" Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.610 
[INFO][4858] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" host="localhost" Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.610 [INFO][4858] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" host="localhost" Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.610 [INFO][4858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:11.650719 containerd[1427]: 2025-08-12 23:57:11.610 [INFO][4858] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" HandleID="k8s-pod-network.c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" Workload="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:11.651730 containerd[1427]: 2025-08-12 23:57:11.616 [INFO][4844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-777z4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--777z4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b91aefd7-fa4c-469b-969b-9459b2f96cc9", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-777z4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7453d490b7b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:11.651730 containerd[1427]: 2025-08-12 23:57:11.616 [INFO][4844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-777z4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:11.651730 containerd[1427]: 2025-08-12 23:57:11.616 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host 
side veth name to cali7453d490b7b ContainerID="c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-777z4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:11.651730 containerd[1427]: 2025-08-12 23:57:11.622 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-777z4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:11.651730 containerd[1427]: 2025-08-12 23:57:11.627 [INFO][4844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-777z4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--777z4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b91aefd7-fa4c-469b-969b-9459b2f96cc9", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5", Pod:"coredns-7c65d6cfc9-777z4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7453d490b7b", MAC:"aa:85:b8:3f:82:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:11.651730 containerd[1427]: 2025-08-12 23:57:11.643 [INFO][4844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-777z4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:11.681348 systemd-networkd[1365]: cali844968967e9: Gained IPv6LL Aug 12 23:57:11.730030 containerd[1427]: time="2025-08-12T23:57:11.729381687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:57:11.730030 containerd[1427]: time="2025-08-12T23:57:11.729881807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:57:11.730030 containerd[1427]: time="2025-08-12T23:57:11.729895247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:11.730030 containerd[1427]: time="2025-08-12T23:57:11.730040847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:11.758149 systemd[1]: Started cri-containerd-c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5.scope - libcontainer container c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5. Aug 12 23:57:11.772180 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:57:11.793112 containerd[1427]: time="2025-08-12T23:57:11.792477529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-777z4,Uid:b91aefd7-fa4c-469b-969b-9459b2f96cc9,Namespace:kube-system,Attempt:1,} returns sandbox id \"c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5\"" Aug 12 23:57:11.793604 kubelet[2453]: E0812 23:57:11.793573 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:57:11.796630 containerd[1427]: time="2025-08-12T23:57:11.796590769Z" level=info msg="CreateContainer within sandbox \"c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:57:11.822184 containerd[1427]: time="2025-08-12T23:57:11.822135970Z" level=info msg="CreateContainer within sandbox \"c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ff072d7abf381c0fa94113daa13c00c08120a5cba9501f384432d8681a77ae18\"" Aug 12 23:57:11.822808 containerd[1427]: time="2025-08-12T23:57:11.822780450Z" level=info msg="StartContainer for \"ff072d7abf381c0fa94113daa13c00c08120a5cba9501f384432d8681a77ae18\"" Aug 12 23:57:11.866139 systemd[1]: Started cri-containerd-ff072d7abf381c0fa94113daa13c00c08120a5cba9501f384432d8681a77ae18.scope - libcontainer container ff072d7abf381c0fa94113daa13c00c08120a5cba9501f384432d8681a77ae18. 
Aug 12 23:57:11.916444 containerd[1427]: time="2025-08-12T23:57:11.916287134Z" level=info msg="StartContainer for \"ff072d7abf381c0fa94113daa13c00c08120a5cba9501f384432d8681a77ae18\" returns successfully" Aug 12 23:57:11.963471 containerd[1427]: time="2025-08-12T23:57:11.963349696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:11.964582 containerd[1427]: time="2025-08-12T23:57:11.964544256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 12 23:57:11.965645 containerd[1427]: time="2025-08-12T23:57:11.965612336Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:11.968995 containerd[1427]: time="2025-08-12T23:57:11.968214656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:11.969269 containerd[1427]: time="2025-08-12T23:57:11.969140656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.808875195s" Aug 12 23:57:11.969269 containerd[1427]: time="2025-08-12T23:57:11.969181376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 12 23:57:11.970975 containerd[1427]: time="2025-08-12T23:57:11.970780096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 12 23:57:11.971559 containerd[1427]: time="2025-08-12T23:57:11.971527576Z" level=info msg="CreateContainer within sandbox \"253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 12 23:57:11.982476 containerd[1427]: time="2025-08-12T23:57:11.982428897Z" level=info msg="CreateContainer within sandbox \"253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c02c961e43ebdff5f80c69ff882e414e71bc42f4e5abd2d7fb8f91f60e63a0ac\"" Aug 12 23:57:11.983213 containerd[1427]: time="2025-08-12T23:57:11.983185937Z" level=info msg="StartContainer for \"c02c961e43ebdff5f80c69ff882e414e71bc42f4e5abd2d7fb8f91f60e63a0ac\"" Aug 12 23:57:12.012164 systemd[1]: Started cri-containerd-c02c961e43ebdff5f80c69ff882e414e71bc42f4e5abd2d7fb8f91f60e63a0ac.scope - libcontainer container c02c961e43ebdff5f80c69ff882e414e71bc42f4e5abd2d7fb8f91f60e63a0ac. 
Aug 12 23:57:12.044787 containerd[1427]: time="2025-08-12T23:57:12.044710099Z" level=info msg="StartContainer for \"c02c961e43ebdff5f80c69ff882e414e71bc42f4e5abd2d7fb8f91f60e63a0ac\" returns successfully" Aug 12 23:57:12.065245 systemd-networkd[1365]: cali5ceb9fc4dfa: Gained IPv6LL Aug 12 23:57:12.178164 containerd[1427]: time="2025-08-12T23:57:12.178128824Z" level=info msg="StopPodSandbox for \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\"" Aug 12 23:57:12.270730 containerd[1427]: 2025-08-12 23:57:12.233 [INFO][5020] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:57:12.270730 containerd[1427]: 2025-08-12 23:57:12.233 [INFO][5020] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" iface="eth0" netns="/var/run/netns/cni-2efc523f-2427-39ca-e9ad-4fce8b85afac" Aug 12 23:57:12.270730 containerd[1427]: 2025-08-12 23:57:12.233 [INFO][5020] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" iface="eth0" netns="/var/run/netns/cni-2efc523f-2427-39ca-e9ad-4fce8b85afac" Aug 12 23:57:12.270730 containerd[1427]: 2025-08-12 23:57:12.234 [INFO][5020] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" iface="eth0" netns="/var/run/netns/cni-2efc523f-2427-39ca-e9ad-4fce8b85afac" Aug 12 23:57:12.270730 containerd[1427]: 2025-08-12 23:57:12.234 [INFO][5020] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:57:12.270730 containerd[1427]: 2025-08-12 23:57:12.234 [INFO][5020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:57:12.270730 containerd[1427]: 2025-08-12 23:57:12.255 [INFO][5028] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" HandleID="k8s-pod-network.1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Workload="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:12.270730 containerd[1427]: 2025-08-12 23:57:12.255 [INFO][5028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:12.270730 containerd[1427]: 2025-08-12 23:57:12.255 [INFO][5028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:12.270730 containerd[1427]: 2025-08-12 23:57:12.264 [WARNING][5028] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" HandleID="k8s-pod-network.1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Workload="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:12.270730 containerd[1427]: 2025-08-12 23:57:12.264 [INFO][5028] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" HandleID="k8s-pod-network.1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Workload="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:12.270730 containerd[1427]: 2025-08-12 23:57:12.266 [INFO][5028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:57:12.270730 containerd[1427]: 2025-08-12 23:57:12.268 [INFO][5020] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:57:12.271651 containerd[1427]: time="2025-08-12T23:57:12.271517788Z" level=info msg="TearDown network for sandbox \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\" successfully" Aug 12 23:57:12.271651 containerd[1427]: time="2025-08-12T23:57:12.271552588Z" level=info msg="StopPodSandbox for \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\" returns successfully" Aug 12 23:57:12.272385 containerd[1427]: time="2025-08-12T23:57:12.272349068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-j5rs8,Uid:37a2d288-1878-44b7-b193-9a00f02f16f9,Namespace:calico-system,Attempt:1,}" Aug 12 23:57:12.310421 systemd[1]: run-netns-cni\x2d2efc523f\x2d2427\x2d39ca\x2de9ad\x2d4fce8b85afac.mount: Deactivated successfully. Aug 12 23:57:12.406163 kubelet[2453]: E0812 23:57:12.406097 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:57:12.407341 kubelet[2453]: E0812 23:57:12.407319 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:57:12.412671 systemd-networkd[1365]: calibb6ecebffae: Link UP Aug 12 23:57:12.417257 systemd-networkd[1365]: calibb6ecebffae: Gained carrier Aug 12 23:57:12.438814 kubelet[2453]: I0812 23:57:12.438747 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cfd595b89-dfkdk" podStartSLOduration=27.15804513 podStartE2EDuration="30.438730874s" podCreationTimestamp="2025-08-12 23:56:42 +0000 UTC" firstStartedPulling="2025-08-12 23:57:08.689285992 +0000 UTC m=+40.618198230" lastFinishedPulling="2025-08-12 23:57:11.969971736 +0000 UTC m=+43.898883974" observedRunningTime="2025-08-12 23:57:12.416893233 +0000 UTC m=+44.345805511" watchObservedRunningTime="2025-08-12 23:57:12.438730874 +0000 UTC m=+44.367643112" Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.329 [INFO][5036] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0 goldmane-58fd7646b9- calico-system 37a2d288-1878-44b7-b193-9a00f02f16f9 1041 0 2025-08-12 23:56:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-j5rs8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibb6ecebffae [] [] }} ContainerID="df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" Namespace="calico-system" Pod="goldmane-58fd7646b9-j5rs8" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--j5rs8-" Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.329 [INFO][5036] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" Namespace="calico-system" Pod="goldmane-58fd7646b9-j5rs8" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.354 
[INFO][5050] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" HandleID="k8s-pod-network.df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" Workload="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.354 [INFO][5050] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" HandleID="k8s-pod-network.df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" Workload="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3bd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-j5rs8", "timestamp":"2025-08-12 23:57:12.354168151 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.354 [INFO][5050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.354 [INFO][5050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.354 [INFO][5050] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.368 [INFO][5050] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" host="localhost" Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.373 [INFO][5050] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.379 [INFO][5050] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.381 [INFO][5050] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.385 [INFO][5050] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.385 [INFO][5050] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" host="localhost" Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.387 [INFO][5050] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559 Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.391 [INFO][5050] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" host="localhost" Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.400 [INFO][5050] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" host="localhost" Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.400 [INFO][5050] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" host="localhost" Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.400 [INFO][5050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:12.445701 containerd[1427]: 2025-08-12 23:57:12.400 [INFO][5050] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" HandleID="k8s-pod-network.df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" Workload="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:12.446681 containerd[1427]: 2025-08-12 23:57:12.406 [INFO][5036] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" Namespace="calico-system" Pod="goldmane-58fd7646b9-j5rs8" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"37a2d288-1878-44b7-b193-9a00f02f16f9", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-j5rs8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibb6ecebffae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:12.446681 containerd[1427]: 2025-08-12 23:57:12.406 [INFO][5036] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" Namespace="calico-system" Pod="goldmane-58fd7646b9-j5rs8" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:12.446681 containerd[1427]: 2025-08-12 23:57:12.406 [INFO][5036] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb6ecebffae ContainerID="df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" Namespace="calico-system" Pod="goldmane-58fd7646b9-j5rs8" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:12.446681 containerd[1427]: 2025-08-12 23:57:12.412 [INFO][5036] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" Namespace="calico-system" Pod="goldmane-58fd7646b9-j5rs8" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:12.446681 containerd[1427]: 2025-08-12 23:57:12.417 [INFO][5036] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" Namespace="calico-system" Pod="goldmane-58fd7646b9-j5rs8" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"37a2d288-1878-44b7-b193-9a00f02f16f9", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559", Pod:"goldmane-58fd7646b9-j5rs8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibb6ecebffae", MAC:"3a:fb:e7:af:ef:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:12.446681 containerd[1427]: 2025-08-12 23:57:12.438 [INFO][5036] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559" Namespace="calico-system" Pod="goldmane-58fd7646b9-j5rs8" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:12.476970 containerd[1427]: time="2025-08-12T23:57:12.476821316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 12 23:57:12.476970 containerd[1427]: time="2025-08-12T23:57:12.476903596Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 12 23:57:12.476970 containerd[1427]: time="2025-08-12T23:57:12.476916196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:12.477197 containerd[1427]: time="2025-08-12T23:57:12.477054596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 12 23:57:12.509214 systemd[1]: Started cri-containerd-df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559.scope - libcontainer container df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559. 
Aug 12 23:57:12.521803 systemd-resolved[1305]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:57:12.549244 containerd[1427]: time="2025-08-12T23:57:12.549082838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-j5rs8,Uid:37a2d288-1878-44b7-b193-9a00f02f16f9,Namespace:calico-system,Attempt:1,} returns sandbox id \"df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559\"" Aug 12 23:57:13.412841 kubelet[2453]: E0812 23:57:13.411780 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:57:13.412841 kubelet[2453]: E0812 23:57:13.411881 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:57:13.506887 kubelet[2453]: I0812 23:57:13.506829 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-777z4" podStartSLOduration=39.506794434 podStartE2EDuration="39.506794434s" podCreationTimestamp="2025-08-12 23:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:57:12.441176114 +0000 UTC m=+44.370088352" watchObservedRunningTime="2025-08-12 23:57:13.506794434 +0000 UTC m=+45.435706672" Aug 12 23:57:13.601257 systemd-networkd[1365]: cali7453d490b7b: Gained IPv6LL Aug 12 23:57:13.752623 containerd[1427]: time="2025-08-12T23:57:13.752115162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:13.753125 containerd[1427]: time="2025-08-12T23:57:13.752675282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 12 23:57:13.753608 containerd[1427]: time="2025-08-12T23:57:13.753520922Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:13.757229 containerd[1427]: time="2025-08-12T23:57:13.757179043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:13.757958 containerd[1427]: time="2025-08-12T23:57:13.757839523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 1.787013667s" Aug 12 23:57:13.757958 containerd[1427]: time="2025-08-12T23:57:13.757880963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 12 23:57:13.759038 containerd[1427]: time="2025-08-12T23:57:13.759004523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 12 23:57:13.769276 containerd[1427]: time="2025-08-12T23:57:13.769220483Z" 
level=info msg="CreateContainer within sandbox \"7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 12 23:57:13.784761 containerd[1427]: time="2025-08-12T23:57:13.784708004Z" level=info msg="CreateContainer within sandbox \"7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7a6e11fe98215ecedccfe75a077811f402e7809cf95de3cf635f11053eb8ff7d\"" Aug 12 23:57:13.786105 containerd[1427]: time="2025-08-12T23:57:13.785368724Z" level=info msg="StartContainer for \"7a6e11fe98215ecedccfe75a077811f402e7809cf95de3cf635f11053eb8ff7d\"" Aug 12 23:57:13.818248 systemd[1]: Started cri-containerd-7a6e11fe98215ecedccfe75a077811f402e7809cf95de3cf635f11053eb8ff7d.scope - libcontainer container 7a6e11fe98215ecedccfe75a077811f402e7809cf95de3cf635f11053eb8ff7d. Aug 12 23:57:13.874083 containerd[1427]: time="2025-08-12T23:57:13.873221927Z" level=info msg="StartContainer for \"7a6e11fe98215ecedccfe75a077811f402e7809cf95de3cf635f11053eb8ff7d\" returns successfully" Aug 12 23:57:14.114322 systemd-networkd[1365]: calibb6ecebffae: Gained IPv6LL Aug 12 23:57:14.417344 kubelet[2453]: E0812 23:57:14.417307 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 12 23:57:14.436612 kubelet[2453]: I0812 23:57:14.436495 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-597d6cb8d8-g8nzb" podStartSLOduration=23.232087776 podStartE2EDuration="27.436474426s" podCreationTimestamp="2025-08-12 23:56:47 +0000 UTC" firstStartedPulling="2025-08-12 23:57:09.554455193 +0000 UTC m=+41.483367431" lastFinishedPulling="2025-08-12 23:57:13.758841843 +0000 UTC m=+45.687754081" observedRunningTime="2025-08-12 23:57:14.436128826 +0000 UTC m=+46.365041064" watchObservedRunningTime="2025-08-12 23:57:14.436474426 +0000 UTC m=+46.365386664" Aug 12 23:57:14.942658 containerd[1427]: time="2025-08-12T23:57:14.942600963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:14.947977 containerd[1427]: time="2025-08-12T23:57:14.947893523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 12 23:57:14.950537 containerd[1427]: time="2025-08-12T23:57:14.950467323Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:14.953331 containerd[1427]: time="2025-08-12T23:57:14.953099683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:14.954493 containerd[1427]: time="2025-08-12T23:57:14.954257523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", 
size \"15123559\" in 1.19521688s" Aug 12 23:57:14.954493 containerd[1427]: time="2025-08-12T23:57:14.954298603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 12 23:57:14.956159 containerd[1427]: time="2025-08-12T23:57:14.955627083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 12 23:57:14.957295 containerd[1427]: time="2025-08-12T23:57:14.957164523Z" level=info msg="CreateContainer within sandbox \"e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 12 23:57:14.976927 containerd[1427]: time="2025-08-12T23:57:14.976877444Z" level=info msg="CreateContainer within sandbox \"e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"82b41940143213bc0e84e03880e20fd31af301914c5e6ca0892527beb21a328a\"" Aug 12 23:57:14.978002 containerd[1427]: time="2025-08-12T23:57:14.977715084Z" level=info msg="StartContainer for \"82b41940143213bc0e84e03880e20fd31af301914c5e6ca0892527beb21a328a\"" Aug 12 23:57:15.011163 systemd[1]: Started cri-containerd-82b41940143213bc0e84e03880e20fd31af301914c5e6ca0892527beb21a328a.scope - libcontainer container 82b41940143213bc0e84e03880e20fd31af301914c5e6ca0892527beb21a328a. Aug 12 23:57:15.049701 containerd[1427]: time="2025-08-12T23:57:15.049331086Z" level=info msg="StartContainer for \"82b41940143213bc0e84e03880e20fd31af301914c5e6ca0892527beb21a328a\" returns successfully" Aug 12 23:57:15.254922 kubelet[2453]: I0812 23:57:15.254795 2453 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 12 23:57:15.258195 kubelet[2453]: I0812 23:57:15.258148 2453 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 12 23:57:15.347608 containerd[1427]: time="2025-08-12T23:57:15.347558456Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:15.349133 containerd[1427]: time="2025-08-12T23:57:15.349073616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 12 23:57:15.355385 containerd[1427]: time="2025-08-12T23:57:15.355326496Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 399.659853ms" Aug 12 23:57:15.355385 containerd[1427]: time="2025-08-12T23:57:15.355383696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 12 23:57:15.356433 containerd[1427]: time="2025-08-12T23:57:15.356318016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 12 23:57:15.357856 containerd[1427]: time="2025-08-12T23:57:15.357800216Z" level=info msg="CreateContainer within sandbox 
\"246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 12 23:57:15.373618 containerd[1427]: time="2025-08-12T23:57:15.373391256Z" level=info msg="CreateContainer within sandbox \"246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"967ea6a28c53e25b45917978bb480fe7c834dc2cbac23a79f2ec135192d4c5d3\"" Aug 12 23:57:15.373983 containerd[1427]: time="2025-08-12T23:57:15.373881576Z" level=info msg="StartContainer for \"967ea6a28c53e25b45917978bb480fe7c834dc2cbac23a79f2ec135192d4c5d3\"" Aug 12 23:57:15.413588 systemd[1]: Started cri-containerd-967ea6a28c53e25b45917978bb480fe7c834dc2cbac23a79f2ec135192d4c5d3.scope - libcontainer container 967ea6a28c53e25b45917978bb480fe7c834dc2cbac23a79f2ec135192d4c5d3. Aug 12 23:57:15.447168 kubelet[2453]: I0812 23:57:15.446841 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w79sn" podStartSLOduration=22.084965003 podStartE2EDuration="28.446822459s" podCreationTimestamp="2025-08-12 23:56:47 +0000 UTC" firstStartedPulling="2025-08-12 23:57:08.593476547 +0000 UTC m=+40.522388745" lastFinishedPulling="2025-08-12 23:57:14.955333963 +0000 UTC m=+46.884246201" observedRunningTime="2025-08-12 23:57:15.446360819 +0000 UTC m=+47.375273017" watchObservedRunningTime="2025-08-12 23:57:15.446822459 +0000 UTC m=+47.375734697" Aug 12 23:57:15.557359 containerd[1427]: time="2025-08-12T23:57:15.557151542Z" level=info msg="StartContainer for \"967ea6a28c53e25b45917978bb480fe7c834dc2cbac23a79f2ec135192d4c5d3\" returns successfully" Aug 12 23:57:16.345307 systemd[1]: Started sshd@8-10.0.0.26:22-10.0.0.1:34970.service - OpenSSH per-connection server daemon (10.0.0.1:34970). Aug 12 23:57:16.435877 sshd[5280]: Accepted publickey for core from 10.0.0.1 port 34970 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:57:16.441996 sshd[5280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:16.455531 systemd-logind[1412]: New session 9 of user core. Aug 12 23:57:16.466075 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 12 23:57:16.892078 sshd[5280]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:16.897575 systemd[1]: sshd@8-10.0.0.26:22-10.0.0.1:34970.service: Deactivated successfully. Aug 12 23:57:16.903150 systemd[1]: session-9.scope: Deactivated successfully. Aug 12 23:57:16.906737 systemd-logind[1412]: Session 9 logged out. Waiting for processes to exit. Aug 12 23:57:16.908335 systemd-logind[1412]: Removed session 9. Aug 12 23:57:17.186724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2247567161.mount: Deactivated successfully. 
Aug 12 23:57:17.440998 kubelet[2453]: I0812 23:57:17.440851 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:57:17.731074 containerd[1427]: time="2025-08-12T23:57:17.729981326Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:17.740304 containerd[1427]: time="2025-08-12T23:57:17.740055366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 12 23:57:17.850505 containerd[1427]: time="2025-08-12T23:57:17.850001929Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:17.870865 containerd[1427]: time="2025-08-12T23:57:17.870804409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:57:17.872207 containerd[1427]: time="2025-08-12T23:57:17.872040929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.515681073s" Aug 12 23:57:17.872207 containerd[1427]: time="2025-08-12T23:57:17.872108529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 12 23:57:17.874765 containerd[1427]: time="2025-08-12T23:57:17.874713490Z" level=info msg="CreateContainer within sandbox \"df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 12 23:57:17.932993 containerd[1427]: time="2025-08-12T23:57:17.932871651Z" level=info msg="CreateContainer within sandbox \"df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"40fb8f0c4aae3cf2b43cf7cf839df2eda98fc2a29fcfb708888d3f98139c480b\"" Aug 12 23:57:17.934254 containerd[1427]: time="2025-08-12T23:57:17.933789811Z" level=info msg="StartContainer for \"40fb8f0c4aae3cf2b43cf7cf839df2eda98fc2a29fcfb708888d3f98139c480b\"" Aug 12 23:57:17.988237 systemd[1]: Started cri-containerd-40fb8f0c4aae3cf2b43cf7cf839df2eda98fc2a29fcfb708888d3f98139c480b.scope - libcontainer container 40fb8f0c4aae3cf2b43cf7cf839df2eda98fc2a29fcfb708888d3f98139c480b. 
Aug 12 23:57:18.061392 containerd[1427]: time="2025-08-12T23:57:18.061184535Z" level=info msg="StartContainer for \"40fb8f0c4aae3cf2b43cf7cf839df2eda98fc2a29fcfb708888d3f98139c480b\" returns successfully" Aug 12 23:57:18.470616 kubelet[2453]: I0812 23:57:18.470550 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cfd595b89-s4blw" podStartSLOduration=31.763539411 podStartE2EDuration="36.470530985s" podCreationTimestamp="2025-08-12 23:56:42 +0000 UTC" firstStartedPulling="2025-08-12 23:57:10.649164962 +0000 UTC m=+42.578077200" lastFinishedPulling="2025-08-12 23:57:15.356156536 +0000 UTC m=+47.285068774" observedRunningTime="2025-08-12 23:57:16.47311465 +0000 UTC m=+48.402026888" watchObservedRunningTime="2025-08-12 23:57:18.470530985 +0000 UTC m=+50.399443223" Aug 12 23:57:18.471505 kubelet[2453]: I0812 23:57:18.470831 2453 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-j5rs8" podStartSLOduration=26.148975974 podStartE2EDuration="31.470824945s" podCreationTimestamp="2025-08-12 23:56:47 +0000 UTC" firstStartedPulling="2025-08-12 23:57:12.551074078 +0000 UTC m=+44.479986316" lastFinishedPulling="2025-08-12 23:57:17.872923049 +0000 UTC m=+49.801835287" observedRunningTime="2025-08-12 23:57:18.470389745 +0000 UTC m=+50.399301983" watchObservedRunningTime="2025-08-12 23:57:18.470824945 +0000 UTC m=+50.399737183" Aug 12 23:57:18.478185 systemd[1]: run-containerd-runc-k8s.io-40fb8f0c4aae3cf2b43cf7cf839df2eda98fc2a29fcfb708888d3f98139c480b-runc.o8ktIx.mount: Deactivated successfully. Aug 12 23:57:21.902700 systemd[1]: Started sshd@9-10.0.0.26:22-10.0.0.1:34982.service - OpenSSH per-connection server daemon (10.0.0.1:34982). Aug 12 23:57:21.960680 sshd[5425]: Accepted publickey for core from 10.0.0.1 port 34982 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:57:21.962661 sshd[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:21.966864 systemd-logind[1412]: New session 10 of user core. Aug 12 23:57:21.976312 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 12 23:57:22.303156 sshd[5425]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:22.309649 systemd[1]: sshd@9-10.0.0.26:22-10.0.0.1:34982.service: Deactivated successfully. Aug 12 23:57:22.312294 systemd[1]: session-10.scope: Deactivated successfully. Aug 12 23:57:22.313773 systemd-logind[1412]: Session 10 logged out. Waiting for processes to exit. Aug 12 23:57:22.321362 systemd[1]: Started sshd@10-10.0.0.26:22-10.0.0.1:34986.service - OpenSSH per-connection server daemon (10.0.0.1:34986). Aug 12 23:57:22.322694 systemd-logind[1412]: Removed session 10. Aug 12 23:57:22.368086 sshd[5441]: Accepted publickey for core from 10.0.0.1 port 34986 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:57:22.369634 sshd[5441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:22.377442 systemd-logind[1412]: New session 11 of user core. Aug 12 23:57:22.391317 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 12 23:57:22.610483 sshd[5441]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:22.619116 systemd[1]: sshd@10-10.0.0.26:22-10.0.0.1:34986.service: Deactivated successfully. Aug 12 23:57:22.623230 systemd[1]: session-11.scope: Deactivated successfully. Aug 12 23:57:22.624316 systemd-logind[1412]: Session 11 logged out. 
Waiting for processes to exit. Aug 12 23:57:22.636185 systemd[1]: Started sshd@11-10.0.0.26:22-10.0.0.1:59166.service - OpenSSH per-connection server daemon (10.0.0.1:59166). Aug 12 23:57:22.639483 systemd-logind[1412]: Removed session 11. Aug 12 23:57:22.677969 sshd[5462]: Accepted publickey for core from 10.0.0.1 port 59166 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:57:22.678887 sshd[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:22.683240 systemd-logind[1412]: New session 12 of user core. Aug 12 23:57:22.691158 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 12 23:57:22.863708 sshd[5462]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:22.867490 systemd[1]: sshd@11-10.0.0.26:22-10.0.0.1:59166.service: Deactivated successfully. Aug 12 23:57:22.869495 systemd[1]: session-12.scope: Deactivated successfully. Aug 12 23:57:22.870331 systemd-logind[1412]: Session 12 logged out. Waiting for processes to exit. Aug 12 23:57:22.871354 systemd-logind[1412]: Removed session 12. Aug 12 23:57:27.876681 systemd[1]: Started sshd@12-10.0.0.26:22-10.0.0.1:59170.service - OpenSSH per-connection server daemon (10.0.0.1:59170). Aug 12 23:57:27.919482 sshd[5503]: Accepted publickey for core from 10.0.0.1 port 59170 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:57:27.921486 sshd[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:27.927924 systemd-logind[1412]: New session 13 of user core. Aug 12 23:57:27.939460 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 12 23:57:28.084888 sshd[5503]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:28.102485 systemd[1]: sshd@12-10.0.0.26:22-10.0.0.1:59170.service: Deactivated successfully. Aug 12 23:57:28.104465 systemd[1]: session-13.scope: Deactivated successfully. Aug 12 23:57:28.106170 systemd-logind[1412]: Session 13 logged out. Waiting for processes to exit. Aug 12 23:57:28.107711 systemd[1]: Started sshd@13-10.0.0.26:22-10.0.0.1:59172.service - OpenSSH per-connection server daemon (10.0.0.1:59172). Aug 12 23:57:28.110762 systemd-logind[1412]: Removed session 13. Aug 12 23:57:28.155514 containerd[1427]: time="2025-08-12T23:57:28.155452492Z" level=info msg="StopPodSandbox for \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\"" Aug 12 23:57:28.164990 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 59172 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:57:28.165019 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:28.171156 systemd-logind[1412]: New session 14 of user core. Aug 12 23:57:28.180315 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 12 23:57:28.264968 containerd[1427]: 2025-08-12 23:57:28.214 [WARNING][5529] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c162d718-8d24-407d-9e4c-2cf3d4f42ab4", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448", Pod:"coredns-7c65d6cfc9-zvjr6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ceb9fc4dfa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:28.264968 containerd[1427]: 2025-08-12 23:57:28.215 [INFO][5529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:57:28.264968 containerd[1427]: 2025-08-12 23:57:28.215 [INFO][5529] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" iface="eth0" netns="" Aug 12 23:57:28.264968 containerd[1427]: 2025-08-12 23:57:28.215 [INFO][5529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:57:28.264968 containerd[1427]: 2025-08-12 23:57:28.215 [INFO][5529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:57:28.264968 containerd[1427]: 2025-08-12 23:57:28.247 [INFO][5540] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" HandleID="k8s-pod-network.7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Workload="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:28.264968 containerd[1427]: 2025-08-12 23:57:28.247 [INFO][5540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:28.264968 containerd[1427]: 2025-08-12 23:57:28.248 [INFO][5540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:57:28.264968 containerd[1427]: 2025-08-12 23:57:28.257 [WARNING][5540] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" HandleID="k8s-pod-network.7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Workload="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:28.264968 containerd[1427]: 2025-08-12 23:57:28.257 [INFO][5540] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" HandleID="k8s-pod-network.7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Workload="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:28.264968 containerd[1427]: 2025-08-12 23:57:28.260 [INFO][5540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:28.264968 containerd[1427]: 2025-08-12 23:57:28.263 [INFO][5529] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:57:28.265674 containerd[1427]: time="2025-08-12T23:57:28.264982573Z" level=info msg="TearDown network for sandbox \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\" successfully" Aug 12 23:57:28.265674 containerd[1427]: time="2025-08-12T23:57:28.265008733Z" level=info msg="StopPodSandbox for \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\" returns successfully" Aug 12 23:57:28.266260 containerd[1427]: time="2025-08-12T23:57:28.266234093Z" level=info msg="RemovePodSandbox for \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\"" Aug 12 23:57:28.278626 containerd[1427]: time="2025-08-12T23:57:28.278554053Z" level=info msg="Forcibly stopping sandbox \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\"" Aug 12 23:57:28.357003 containerd[1427]: 2025-08-12 23:57:28.314 [WARNING][5563] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c162d718-8d24-407d-9e4c-2cf3d4f42ab4", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8f7428ce36f17151468f80d859dfc2e103afb7e5664e1568db6801deeab92448", Pod:"coredns-7c65d6cfc9-zvjr6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ceb9fc4dfa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:28.357003 containerd[1427]: 2025-08-12 23:57:28.314 [INFO][5563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:57:28.357003 containerd[1427]: 2025-08-12 23:57:28.314 [INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" iface="eth0" netns="" Aug 12 23:57:28.357003 containerd[1427]: 2025-08-12 23:57:28.314 [INFO][5563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:57:28.357003 containerd[1427]: 2025-08-12 23:57:28.314 [INFO][5563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:57:28.357003 containerd[1427]: 2025-08-12 23:57:28.339 [INFO][5572] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" HandleID="k8s-pod-network.7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Workload="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:28.357003 containerd[1427]: 2025-08-12 23:57:28.339 [INFO][5572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:28.357003 containerd[1427]: 2025-08-12 23:57:28.339 [INFO][5572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:57:28.357003 containerd[1427]: 2025-08-12 23:57:28.349 [WARNING][5572] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" HandleID="k8s-pod-network.7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Workload="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:28.357003 containerd[1427]: 2025-08-12 23:57:28.350 [INFO][5572] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" HandleID="k8s-pod-network.7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Workload="localhost-k8s-coredns--7c65d6cfc9--zvjr6-eth0" Aug 12 23:57:28.357003 containerd[1427]: 2025-08-12 23:57:28.352 [INFO][5572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:28.357003 containerd[1427]: 2025-08-12 23:57:28.354 [INFO][5563] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e" Aug 12 23:57:28.357003 containerd[1427]: time="2025-08-12T23:57:28.356499734Z" level=info msg="TearDown network for sandbox \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\" successfully" Aug 12 23:57:28.377642 containerd[1427]: time="2025-08-12T23:57:28.377567855Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 12 23:57:28.377806 containerd[1427]: time="2025-08-12T23:57:28.377695455Z" level=info msg="RemovePodSandbox \"7e4658044ea54f16bfb5327b7660bcfea96a97653ec162a5b4b67931c051c93e\" returns successfully" Aug 12 23:57:28.378271 containerd[1427]: time="2025-08-12T23:57:28.378245415Z" level=info msg="StopPodSandbox for \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\"" Aug 12 23:57:28.442088 sshd[5517]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:28.460926 systemd[1]: sshd@13-10.0.0.26:22-10.0.0.1:59172.service: Deactivated successfully. Aug 12 23:57:28.467140 systemd[1]: session-14.scope: Deactivated successfully. Aug 12 23:57:28.469132 systemd-logind[1412]: Session 14 logged out. Waiting for processes to exit. Aug 12 23:57:28.471135 containerd[1427]: 2025-08-12 23:57:28.420 [WARNING][5591] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"37a2d288-1878-44b7-b193-9a00f02f16f9", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559", Pod:"goldmane-58fd7646b9-j5rs8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibb6ecebffae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:28.471135 containerd[1427]: 2025-08-12 23:57:28.421 [INFO][5591] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:57:28.471135 containerd[1427]: 2025-08-12 23:57:28.421 [INFO][5591] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" iface="eth0" netns="" Aug 12 23:57:28.471135 containerd[1427]: 2025-08-12 23:57:28.421 [INFO][5591] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:57:28.471135 containerd[1427]: 2025-08-12 23:57:28.421 [INFO][5591] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:57:28.471135 containerd[1427]: 2025-08-12 23:57:28.442 [INFO][5599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" HandleID="k8s-pod-network.1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Workload="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:28.471135 containerd[1427]: 2025-08-12 23:57:28.442 [INFO][5599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:28.471135 containerd[1427]: 2025-08-12 23:57:28.442 [INFO][5599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:28.471135 containerd[1427]: 2025-08-12 23:57:28.455 [WARNING][5599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" HandleID="k8s-pod-network.1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Workload="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:28.471135 containerd[1427]: 2025-08-12 23:57:28.455 [INFO][5599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" HandleID="k8s-pod-network.1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Workload="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:28.471135 containerd[1427]: 2025-08-12 23:57:28.456 [INFO][5599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:28.471135 containerd[1427]: 2025-08-12 23:57:28.460 [INFO][5591] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:57:28.471135 containerd[1427]: time="2025-08-12T23:57:28.470167336Z" level=info msg="TearDown network for sandbox \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\" successfully" Aug 12 23:57:28.471135 containerd[1427]: time="2025-08-12T23:57:28.470191856Z" level=info msg="StopPodSandbox for \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\" returns successfully" Aug 12 23:57:28.473192 containerd[1427]: time="2025-08-12T23:57:28.473158136Z" level=info msg="RemovePodSandbox for \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\"" Aug 12 23:57:28.473250 containerd[1427]: time="2025-08-12T23:57:28.473199616Z" level=info msg="Forcibly stopping sandbox \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\"" Aug 12 23:57:28.475733 systemd[1]: Started sshd@14-10.0.0.26:22-10.0.0.1:59178.service - OpenSSH per-connection server daemon (10.0.0.1:59178). Aug 12 23:57:28.477134 systemd-logind[1412]: Removed session 14. Aug 12 23:57:28.524494 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 59178 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:57:28.526501 sshd[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:28.531518 systemd-logind[1412]: New session 15 of user core. Aug 12 23:57:28.536109 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 12 23:57:28.552806 containerd[1427]: 2025-08-12 23:57:28.516 [WARNING][5620] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"37a2d288-1878-44b7-b193-9a00f02f16f9", ResourceVersion:"1132", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df321fb44524762a4ae720892e295e714d29843d4063a5a87cca330046a3a559", Pod:"goldmane-58fd7646b9-j5rs8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibb6ecebffae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:28.552806 containerd[1427]: 2025-08-12 23:57:28.516 [INFO][5620] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:57:28.552806 containerd[1427]: 2025-08-12 23:57:28.516 [INFO][5620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" iface="eth0" netns="" Aug 12 23:57:28.552806 containerd[1427]: 2025-08-12 23:57:28.516 [INFO][5620] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:57:28.552806 containerd[1427]: 2025-08-12 23:57:28.516 [INFO][5620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:57:28.552806 containerd[1427]: 2025-08-12 23:57:28.538 [INFO][5629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" HandleID="k8s-pod-network.1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Workload="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:28.552806 containerd[1427]: 2025-08-12 23:57:28.538 [INFO][5629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:28.552806 containerd[1427]: 2025-08-12 23:57:28.538 [INFO][5629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:28.552806 containerd[1427]: 2025-08-12 23:57:28.547 [WARNING][5629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" HandleID="k8s-pod-network.1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Workload="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:28.552806 containerd[1427]: 2025-08-12 23:57:28.547 [INFO][5629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" HandleID="k8s-pod-network.1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Workload="localhost-k8s-goldmane--58fd7646b9--j5rs8-eth0" Aug 12 23:57:28.552806 containerd[1427]: 2025-08-12 23:57:28.549 [INFO][5629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:28.552806 containerd[1427]: 2025-08-12 23:57:28.551 [INFO][5620] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51" Aug 12 23:57:28.553226 containerd[1427]: time="2025-08-12T23:57:28.552842057Z" level=info msg="TearDown network for sandbox \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\" successfully" Aug 12 23:57:28.555879 containerd[1427]: time="2025-08-12T23:57:28.555838137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 12 23:57:28.555968 containerd[1427]: time="2025-08-12T23:57:28.555911497Z" level=info msg="RemovePodSandbox \"1bae862bfd1bb85aa32b1d4cb4d6df2f15e4912112cc0271b577d2707e5b0b51\" returns successfully" Aug 12 23:57:28.556433 containerd[1427]: time="2025-08-12T23:57:28.556383537Z" level=info msg="StopPodSandbox for \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\"" Aug 12 23:57:28.626125 containerd[1427]: 2025-08-12 23:57:28.588 [WARNING][5648] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w79sn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0354a12-b391-42fd-af3e-1ce2798bd729", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6", Pod:"csi-node-driver-w79sn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc86096231a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:28.626125 containerd[1427]: 2025-08-12 23:57:28.589 [INFO][5648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:57:28.626125 containerd[1427]: 2025-08-12 23:57:28.589 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" iface="eth0" netns="" Aug 12 23:57:28.626125 containerd[1427]: 2025-08-12 23:57:28.589 [INFO][5648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:57:28.626125 containerd[1427]: 2025-08-12 23:57:28.589 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:57:28.626125 containerd[1427]: 2025-08-12 23:57:28.608 [INFO][5658] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" HandleID="k8s-pod-network.0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Workload="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:28.626125 containerd[1427]: 2025-08-12 23:57:28.608 [INFO][5658] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:28.626125 containerd[1427]: 2025-08-12 23:57:28.608 [INFO][5658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:28.626125 containerd[1427]: 2025-08-12 23:57:28.620 [WARNING][5658] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" HandleID="k8s-pod-network.0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Workload="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:28.626125 containerd[1427]: 2025-08-12 23:57:28.620 [INFO][5658] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" HandleID="k8s-pod-network.0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Workload="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:28.626125 containerd[1427]: 2025-08-12 23:57:28.622 [INFO][5658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:28.626125 containerd[1427]: 2025-08-12 23:57:28.624 [INFO][5648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:57:28.626694 containerd[1427]: time="2025-08-12T23:57:28.626161898Z" level=info msg="TearDown network for sandbox \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\" successfully" Aug 12 23:57:28.626694 containerd[1427]: time="2025-08-12T23:57:28.626186858Z" level=info msg="StopPodSandbox for \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\" returns successfully" Aug 12 23:57:28.627206 containerd[1427]: time="2025-08-12T23:57:28.627163418Z" level=info msg="RemovePodSandbox for \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\"" Aug 12 23:57:28.627206 containerd[1427]: time="2025-08-12T23:57:28.627198298Z" level=info msg="Forcibly stopping sandbox \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\"" Aug 12 23:57:28.694399 containerd[1427]: 2025-08-12 23:57:28.661 [WARNING][5681] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w79sn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0354a12-b391-42fd-af3e-1ce2798bd729", ResourceVersion:"1103", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e938661f39c85034fe1ca6ba0933f028d1a6433e93917b585c18dbc343eb0de6", Pod:"csi-node-driver-w79sn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibc86096231a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:28.694399 containerd[1427]: 2025-08-12 23:57:28.661 [INFO][5681] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:57:28.694399 containerd[1427]: 2025-08-12 23:57:28.661 [INFO][5681] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" iface="eth0" netns="" Aug 12 23:57:28.694399 containerd[1427]: 2025-08-12 23:57:28.661 [INFO][5681] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:57:28.694399 containerd[1427]: 2025-08-12 23:57:28.661 [INFO][5681] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:57:28.694399 containerd[1427]: 2025-08-12 23:57:28.680 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" HandleID="k8s-pod-network.0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Workload="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:28.694399 containerd[1427]: 2025-08-12 23:57:28.680 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:28.694399 containerd[1427]: 2025-08-12 23:57:28.680 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:28.694399 containerd[1427]: 2025-08-12 23:57:28.689 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" HandleID="k8s-pod-network.0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Workload="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:28.694399 containerd[1427]: 2025-08-12 23:57:28.689 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" HandleID="k8s-pod-network.0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Workload="localhost-k8s-csi--node--driver--w79sn-eth0" Aug 12 23:57:28.694399 containerd[1427]: 2025-08-12 23:57:28.691 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:28.694399 containerd[1427]: 2025-08-12 23:57:28.692 [INFO][5681] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04" Aug 12 23:57:28.694399 containerd[1427]: time="2025-08-12T23:57:28.694361139Z" level=info msg="TearDown network for sandbox \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\" successfully" Aug 12 23:57:28.700384 containerd[1427]: time="2025-08-12T23:57:28.700324219Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 12 23:57:28.700477 containerd[1427]: time="2025-08-12T23:57:28.700408019Z" level=info msg="RemovePodSandbox \"0b1306ce494d1cd88e5f7de76e0e4c03127e7fa106e342cb5dc008a5fb7a1d04\" returns successfully" Aug 12 23:57:28.700885 containerd[1427]: time="2025-08-12T23:57:28.700856139Z" level=info msg="StopPodSandbox for \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\"" Aug 12 23:57:28.776147 containerd[1427]: 2025-08-12 23:57:28.737 [WARNING][5708] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0", GenerateName:"calico-apiserver-7cfd595b89-", Namespace:"calico-apiserver", SelfLink:"", UID:"2291631d-1fc3-422f-a756-efdd85b1d503", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd595b89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c", Pod:"calico-apiserver-7cfd595b89-s4blw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali844968967e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:28.776147 containerd[1427]: 2025-08-12 23:57:28.737 [INFO][5708] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:57:28.776147 containerd[1427]: 2025-08-12 23:57:28.737 [INFO][5708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" iface="eth0" netns="" Aug 12 23:57:28.776147 containerd[1427]: 2025-08-12 23:57:28.737 [INFO][5708] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:57:28.776147 containerd[1427]: 2025-08-12 23:57:28.737 [INFO][5708] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:57:28.776147 containerd[1427]: 2025-08-12 23:57:28.759 [INFO][5716] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" HandleID="k8s-pod-network.704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Workload="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:28.776147 containerd[1427]: 2025-08-12 23:57:28.759 [INFO][5716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:28.776147 containerd[1427]: 2025-08-12 23:57:28.759 [INFO][5716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:28.776147 containerd[1427]: 2025-08-12 23:57:28.771 [WARNING][5716] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" HandleID="k8s-pod-network.704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Workload="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:28.776147 containerd[1427]: 2025-08-12 23:57:28.771 [INFO][5716] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" HandleID="k8s-pod-network.704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Workload="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:28.776147 containerd[1427]: 2025-08-12 23:57:28.772 [INFO][5716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:28.776147 containerd[1427]: 2025-08-12 23:57:28.774 [INFO][5708] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:57:28.777006 containerd[1427]: time="2025-08-12T23:57:28.776192260Z" level=info msg="TearDown network for sandbox \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\" successfully" Aug 12 23:57:28.777006 containerd[1427]: time="2025-08-12T23:57:28.776216700Z" level=info msg="StopPodSandbox for \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\" returns successfully" Aug 12 23:57:28.777006 containerd[1427]: time="2025-08-12T23:57:28.776864220Z" level=info msg="RemovePodSandbox for \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\"" Aug 12 23:57:28.777006 containerd[1427]: time="2025-08-12T23:57:28.776891380Z" level=info msg="Forcibly stopping sandbox \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\"" Aug 12 23:57:28.847586 containerd[1427]: 2025-08-12 23:57:28.813 [WARNING][5733] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0", GenerateName:"calico-apiserver-7cfd595b89-", Namespace:"calico-apiserver", SelfLink:"", UID:"2291631d-1fc3-422f-a756-efdd85b1d503", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd595b89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"246f1adbb1e0a65c77106ef1c7a35f84bf823f8a6c8a5f73934bd707ba63cf5c", Pod:"calico-apiserver-7cfd595b89-s4blw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali844968967e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:28.847586 containerd[1427]: 2025-08-12 23:57:28.813 [INFO][5733] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:57:28.847586 containerd[1427]: 2025-08-12 23:57:28.813 [INFO][5733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" iface="eth0" netns="" Aug 12 23:57:28.847586 containerd[1427]: 2025-08-12 23:57:28.813 [INFO][5733] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:57:28.847586 containerd[1427]: 2025-08-12 23:57:28.813 [INFO][5733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:57:28.847586 containerd[1427]: 2025-08-12 23:57:28.833 [INFO][5742] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" HandleID="k8s-pod-network.704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Workload="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:28.847586 containerd[1427]: 2025-08-12 23:57:28.833 [INFO][5742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:28.847586 containerd[1427]: 2025-08-12 23:57:28.833 [INFO][5742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:28.847586 containerd[1427]: 2025-08-12 23:57:28.842 [WARNING][5742] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" HandleID="k8s-pod-network.704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Workload="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:28.847586 containerd[1427]: 2025-08-12 23:57:28.842 [INFO][5742] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" HandleID="k8s-pod-network.704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Workload="localhost-k8s-calico--apiserver--7cfd595b89--s4blw-eth0" Aug 12 23:57:28.847586 containerd[1427]: 2025-08-12 23:57:28.844 [INFO][5742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:28.847586 containerd[1427]: 2025-08-12 23:57:28.846 [INFO][5733] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513" Aug 12 23:57:28.848070 containerd[1427]: time="2025-08-12T23:57:28.847617501Z" level=info msg="TearDown network for sandbox \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\" successfully" Aug 12 23:57:28.850544 containerd[1427]: time="2025-08-12T23:57:28.850505701Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 12 23:57:28.850624 containerd[1427]: time="2025-08-12T23:57:28.850581421Z" level=info msg="RemovePodSandbox \"704defeb1ef735ff4d89a29cce9b24043a47ad565c3d19165d7eb0dddc580513\" returns successfully" Aug 12 23:57:28.851077 containerd[1427]: time="2025-08-12T23:57:28.851045141Z" level=info msg="StopPodSandbox for \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\"" Aug 12 23:57:28.926267 containerd[1427]: 2025-08-12 23:57:28.884 [WARNING][5759] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--777z4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b91aefd7-fa4c-469b-969b-9459b2f96cc9", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5", Pod:"coredns-7c65d6cfc9-777z4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7453d490b7b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:28.926267 containerd[1427]: 2025-08-12 23:57:28.884 [INFO][5759] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:57:28.926267 containerd[1427]: 2025-08-12 23:57:28.884 [INFO][5759] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" iface="eth0" netns="" Aug 12 23:57:28.926267 containerd[1427]: 2025-08-12 23:57:28.884 [INFO][5759] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:57:28.926267 containerd[1427]: 2025-08-12 23:57:28.884 [INFO][5759] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:57:28.926267 containerd[1427]: 2025-08-12 23:57:28.904 [INFO][5767] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" HandleID="k8s-pod-network.c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Workload="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:28.926267 containerd[1427]: 2025-08-12 23:57:28.905 [INFO][5767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:28.926267 containerd[1427]: 2025-08-12 23:57:28.905 [INFO][5767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:57:28.926267 containerd[1427]: 2025-08-12 23:57:28.919 [WARNING][5767] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" HandleID="k8s-pod-network.c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Workload="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:28.926267 containerd[1427]: 2025-08-12 23:57:28.919 [INFO][5767] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" HandleID="k8s-pod-network.c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Workload="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:28.926267 containerd[1427]: 2025-08-12 23:57:28.921 [INFO][5767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:28.926267 containerd[1427]: 2025-08-12 23:57:28.924 [INFO][5759] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:57:28.926267 containerd[1427]: time="2025-08-12T23:57:28.926217182Z" level=info msg="TearDown network for sandbox \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\" successfully" Aug 12 23:57:28.926920 containerd[1427]: time="2025-08-12T23:57:28.926245182Z" level=info msg="StopPodSandbox for \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\" returns successfully" Aug 12 23:57:28.927410 containerd[1427]: time="2025-08-12T23:57:28.927303342Z" level=info msg="RemovePodSandbox for \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\"" Aug 12 23:57:28.927486 containerd[1427]: time="2025-08-12T23:57:28.927433702Z" level=info msg="Forcibly stopping sandbox \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\"" Aug 12 23:57:29.096803 containerd[1427]: 2025-08-12 23:57:29.051 [WARNING][5788] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--777z4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"b91aefd7-fa4c-469b-969b-9459b2f96cc9", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c075aeb1262c01afc6973a1f7a158d9e2af00a1d2d07ecad55a0dfdda4dc27c5", Pod:"coredns-7c65d6cfc9-777z4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7453d490b7b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:29.096803 containerd[1427]: 2025-08-12 23:57:29.051 [INFO][5788] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:57:29.096803 containerd[1427]: 2025-08-12 23:57:29.051 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" iface="eth0" netns="" Aug 12 23:57:29.096803 containerd[1427]: 2025-08-12 23:57:29.051 [INFO][5788] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:57:29.096803 containerd[1427]: 2025-08-12 23:57:29.051 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:57:29.096803 containerd[1427]: 2025-08-12 23:57:29.077 [INFO][5797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" HandleID="k8s-pod-network.c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Workload="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:29.096803 containerd[1427]: 2025-08-12 23:57:29.077 [INFO][5797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:29.096803 containerd[1427]: 2025-08-12 23:57:29.077 [INFO][5797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:57:29.096803 containerd[1427]: 2025-08-12 23:57:29.086 [WARNING][5797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" HandleID="k8s-pod-network.c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Workload="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:29.096803 containerd[1427]: 2025-08-12 23:57:29.086 [INFO][5797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" HandleID="k8s-pod-network.c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Workload="localhost-k8s-coredns--7c65d6cfc9--777z4-eth0" Aug 12 23:57:29.096803 containerd[1427]: 2025-08-12 23:57:29.090 [INFO][5797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:29.096803 containerd[1427]: 2025-08-12 23:57:29.094 [INFO][5788] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788" Aug 12 23:57:29.096803 containerd[1427]: time="2025-08-12T23:57:29.096766264Z" level=info msg="TearDown network for sandbox \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\" successfully" Aug 12 23:57:29.104337 containerd[1427]: time="2025-08-12T23:57:29.104271864Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 12 23:57:29.104512 containerd[1427]: time="2025-08-12T23:57:29.104357624Z" level=info msg="RemovePodSandbox \"c36b0d4715e0bea1520e632fb693201121f19e410b3260f7da364addf0018788\" returns successfully" Aug 12 23:57:29.104805 containerd[1427]: time="2025-08-12T23:57:29.104776184Z" level=info msg="StopPodSandbox for \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\"" Aug 12 23:57:29.186131 containerd[1427]: 2025-08-12 23:57:29.147 [WARNING][5814] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" WorkloadEndpoint="localhost-k8s-whisker--558b58668--tz428-eth0" Aug 12 23:57:29.186131 containerd[1427]: 2025-08-12 23:57:29.147 [INFO][5814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:57:29.186131 containerd[1427]: 2025-08-12 23:57:29.147 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" iface="eth0" netns="" Aug 12 23:57:29.186131 containerd[1427]: 2025-08-12 23:57:29.147 [INFO][5814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:57:29.186131 containerd[1427]: 2025-08-12 23:57:29.147 [INFO][5814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:57:29.186131 containerd[1427]: 2025-08-12 23:57:29.172 [INFO][5822] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" HandleID="k8s-pod-network.70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Workload="localhost-k8s-whisker--558b58668--tz428-eth0" Aug 12 23:57:29.186131 containerd[1427]: 2025-08-12 23:57:29.172 [INFO][5822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:29.186131 containerd[1427]: 2025-08-12 23:57:29.172 [INFO][5822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:29.186131 containerd[1427]: 2025-08-12 23:57:29.180 [WARNING][5822] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" HandleID="k8s-pod-network.70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Workload="localhost-k8s-whisker--558b58668--tz428-eth0" Aug 12 23:57:29.186131 containerd[1427]: 2025-08-12 23:57:29.181 [INFO][5822] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" HandleID="k8s-pod-network.70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Workload="localhost-k8s-whisker--558b58668--tz428-eth0" Aug 12 23:57:29.186131 containerd[1427]: 2025-08-12 23:57:29.182 [INFO][5822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:29.186131 containerd[1427]: 2025-08-12 23:57:29.184 [INFO][5814] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:57:29.186933 containerd[1427]: time="2025-08-12T23:57:29.186178625Z" level=info msg="TearDown network for sandbox \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\" successfully" Aug 12 23:57:29.186933 containerd[1427]: time="2025-08-12T23:57:29.186205585Z" level=info msg="StopPodSandbox for \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\" returns successfully" Aug 12 23:57:29.187374 containerd[1427]: time="2025-08-12T23:57:29.187081385Z" level=info msg="RemovePodSandbox for \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\"" Aug 12 23:57:29.187374 containerd[1427]: time="2025-08-12T23:57:29.187121985Z" level=info msg="Forcibly stopping sandbox \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\"" Aug 12 23:57:29.284351 containerd[1427]: 2025-08-12 23:57:29.230 [WARNING][5839] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" WorkloadEndpoint="localhost-k8s-whisker--558b58668--tz428-eth0" Aug 12 23:57:29.284351 containerd[1427]: 2025-08-12 23:57:29.230 [INFO][5839] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:57:29.284351 containerd[1427]: 2025-08-12 23:57:29.230 [INFO][5839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" iface="eth0" netns="" Aug 12 23:57:29.284351 containerd[1427]: 2025-08-12 23:57:29.230 [INFO][5839] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:57:29.284351 containerd[1427]: 2025-08-12 23:57:29.230 [INFO][5839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:57:29.284351 containerd[1427]: 2025-08-12 23:57:29.256 [INFO][5847] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" HandleID="k8s-pod-network.70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Workload="localhost-k8s-whisker--558b58668--tz428-eth0" Aug 12 23:57:29.284351 containerd[1427]: 2025-08-12 23:57:29.256 [INFO][5847] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:29.284351 containerd[1427]: 2025-08-12 23:57:29.257 [INFO][5847] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:29.284351 containerd[1427]: 2025-08-12 23:57:29.271 [WARNING][5847] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" HandleID="k8s-pod-network.70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Workload="localhost-k8s-whisker--558b58668--tz428-eth0" Aug 12 23:57:29.284351 containerd[1427]: 2025-08-12 23:57:29.271 [INFO][5847] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" HandleID="k8s-pod-network.70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Workload="localhost-k8s-whisker--558b58668--tz428-eth0" Aug 12 23:57:29.284351 containerd[1427]: 2025-08-12 23:57:29.274 [INFO][5847] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:29.284351 containerd[1427]: 2025-08-12 23:57:29.282 [INFO][5839] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e" Aug 12 23:57:29.284720 containerd[1427]: time="2025-08-12T23:57:29.284390427Z" level=info msg="TearDown network for sandbox \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\" successfully" Aug 12 23:57:29.308096 containerd[1427]: time="2025-08-12T23:57:29.308023547Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 12 23:57:29.308252 containerd[1427]: time="2025-08-12T23:57:29.308116987Z" level=info msg="RemovePodSandbox \"70f6d7b061eac74525e834990c2d8980a009747863aafda7f61f75f8193d066e\" returns successfully" Aug 12 23:57:29.309594 containerd[1427]: time="2025-08-12T23:57:29.309564227Z" level=info msg="StopPodSandbox for \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\"" Aug 12 23:57:29.383673 containerd[1427]: 2025-08-12 23:57:29.347 [WARNING][5866] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0", GenerateName:"calico-apiserver-7cfd595b89-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0d70758-a4a3-42e9-8e23-756a91ca0f83", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd595b89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771", Pod:"calico-apiserver-7cfd595b89-dfkdk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64f83a8d707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:29.383673 containerd[1427]: 2025-08-12 23:57:29.348 [INFO][5866] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:57:29.383673 containerd[1427]: 2025-08-12 23:57:29.348 [INFO][5866] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" iface="eth0" netns="" Aug 12 23:57:29.383673 containerd[1427]: 2025-08-12 23:57:29.348 [INFO][5866] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:57:29.383673 containerd[1427]: 2025-08-12 23:57:29.348 [INFO][5866] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:57:29.383673 containerd[1427]: 2025-08-12 23:57:29.369 [INFO][5875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" HandleID="k8s-pod-network.6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Workload="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:29.383673 containerd[1427]: 2025-08-12 23:57:29.369 [INFO][5875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:29.383673 containerd[1427]: 2025-08-12 23:57:29.370 [INFO][5875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:29.383673 containerd[1427]: 2025-08-12 23:57:29.378 [WARNING][5875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" HandleID="k8s-pod-network.6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Workload="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:29.383673 containerd[1427]: 2025-08-12 23:57:29.378 [INFO][5875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" HandleID="k8s-pod-network.6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Workload="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:29.383673 containerd[1427]: 2025-08-12 23:57:29.379 [INFO][5875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:29.383673 containerd[1427]: 2025-08-12 23:57:29.381 [INFO][5866] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:57:29.383673 containerd[1427]: time="2025-08-12T23:57:29.383658708Z" level=info msg="TearDown network for sandbox \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\" successfully" Aug 12 23:57:29.384237 containerd[1427]: time="2025-08-12T23:57:29.383687828Z" level=info msg="StopPodSandbox for \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\" returns successfully" Aug 12 23:57:29.384716 containerd[1427]: time="2025-08-12T23:57:29.384473508Z" level=info msg="RemovePodSandbox for \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\"" Aug 12 23:57:29.384716 containerd[1427]: time="2025-08-12T23:57:29.384507468Z" level=info msg="Forcibly stopping sandbox \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\"" Aug 12 23:57:29.456818 containerd[1427]: 2025-08-12 23:57:29.420 [WARNING][5892] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0", GenerateName:"calico-apiserver-7cfd595b89-", Namespace:"calico-apiserver", SelfLink:"", UID:"d0d70758-a4a3-42e9-8e23-756a91ca0f83", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cfd595b89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"253dc7ef4a4a7d105efaff876f43a709fffe2c90e196f9da33fe356717486771", Pod:"calico-apiserver-7cfd595b89-dfkdk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali64f83a8d707", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:29.456818 containerd[1427]: 2025-08-12 23:57:29.421 [INFO][5892] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:57:29.456818 containerd[1427]: 2025-08-12 23:57:29.421 [INFO][5892] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" iface="eth0" netns="" Aug 12 23:57:29.456818 containerd[1427]: 2025-08-12 23:57:29.421 [INFO][5892] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:57:29.456818 containerd[1427]: 2025-08-12 23:57:29.421 [INFO][5892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:57:29.456818 containerd[1427]: 2025-08-12 23:57:29.441 [INFO][5901] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" HandleID="k8s-pod-network.6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Workload="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:29.456818 containerd[1427]: 2025-08-12 23:57:29.441 [INFO][5901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:29.456818 containerd[1427]: 2025-08-12 23:57:29.441 [INFO][5901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:29.456818 containerd[1427]: 2025-08-12 23:57:29.451 [WARNING][5901] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" HandleID="k8s-pod-network.6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Workload="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:29.456818 containerd[1427]: 2025-08-12 23:57:29.451 [INFO][5901] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" HandleID="k8s-pod-network.6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Workload="localhost-k8s-calico--apiserver--7cfd595b89--dfkdk-eth0" Aug 12 23:57:29.456818 containerd[1427]: 2025-08-12 23:57:29.452 [INFO][5901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:29.456818 containerd[1427]: 2025-08-12 23:57:29.454 [INFO][5892] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc" Aug 12 23:57:29.456818 containerd[1427]: time="2025-08-12T23:57:29.456776229Z" level=info msg="TearDown network for sandbox \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\" successfully" Aug 12 23:57:29.460017 containerd[1427]: time="2025-08-12T23:57:29.459853029Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 12 23:57:29.460017 containerd[1427]: time="2025-08-12T23:57:29.460009909Z" level=info msg="RemovePodSandbox \"6164941f62decaa11c516a3ef2a656046443293be321bdc5ae1034989bf617dc\" returns successfully" Aug 12 23:57:29.461510 containerd[1427]: time="2025-08-12T23:57:29.461216629Z" level=info msg="StopPodSandbox for \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\"" Aug 12 23:57:29.553474 containerd[1427]: 2025-08-12 23:57:29.518 [WARNING][5918] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0", GenerateName:"calico-kube-controllers-597d6cb8d8-", Namespace:"calico-system", SelfLink:"", UID:"ad64c6a3-30ab-4a17-9b28-e049d25c1e5b", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597d6cb8d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16", Pod:"calico-kube-controllers-597d6cb8d8-g8nzb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80f4d5d483f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:29.553474 containerd[1427]: 2025-08-12 23:57:29.518 [INFO][5918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:57:29.553474 containerd[1427]: 2025-08-12 23:57:29.518 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" iface="eth0" netns="" Aug 12 23:57:29.553474 containerd[1427]: 2025-08-12 23:57:29.518 [INFO][5918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:57:29.553474 containerd[1427]: 2025-08-12 23:57:29.518 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:57:29.553474 containerd[1427]: 2025-08-12 23:57:29.539 [INFO][5928] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" HandleID="k8s-pod-network.406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Workload="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:29.553474 containerd[1427]: 2025-08-12 23:57:29.539 [INFO][5928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:29.553474 containerd[1427]: 2025-08-12 23:57:29.539 [INFO][5928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:29.553474 containerd[1427]: 2025-08-12 23:57:29.547 [WARNING][5928] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" HandleID="k8s-pod-network.406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Workload="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:29.553474 containerd[1427]: 2025-08-12 23:57:29.547 [INFO][5928] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" HandleID="k8s-pod-network.406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Workload="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:29.553474 containerd[1427]: 2025-08-12 23:57:29.549 [INFO][5928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:29.553474 containerd[1427]: 2025-08-12 23:57:29.551 [INFO][5918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:57:29.553913 containerd[1427]: time="2025-08-12T23:57:29.553509270Z" level=info msg="TearDown network for sandbox \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\" successfully" Aug 12 23:57:29.553913 containerd[1427]: time="2025-08-12T23:57:29.553535790Z" level=info msg="StopPodSandbox for \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\" returns successfully" Aug 12 23:57:29.554218 containerd[1427]: time="2025-08-12T23:57:29.554155430Z" level=info msg="RemovePodSandbox for \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\"" Aug 12 23:57:29.554265 containerd[1427]: time="2025-08-12T23:57:29.554238510Z" level=info msg="Forcibly stopping sandbox \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\"" Aug 12 23:57:29.630052 containerd[1427]: 2025-08-12 23:57:29.591 [WARNING][5944] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0", GenerateName:"calico-kube-controllers-597d6cb8d8-", Namespace:"calico-system", SelfLink:"", UID:"ad64c6a3-30ab-4a17-9b28-e049d25c1e5b", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 56, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"597d6cb8d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7710d04ba4970e99ef715cf1d4b0995297dc1cc8c3f6d3f5bae6b1dfda742c16", Pod:"calico-kube-controllers-597d6cb8d8-g8nzb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali80f4d5d483f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:57:29.630052 containerd[1427]: 2025-08-12 23:57:29.591 [INFO][5944] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:57:29.630052 containerd[1427]: 2025-08-12 23:57:29.591 [INFO][5944] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" iface="eth0" netns="" Aug 12 23:57:29.630052 containerd[1427]: 2025-08-12 23:57:29.591 [INFO][5944] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:57:29.630052 containerd[1427]: 2025-08-12 23:57:29.591 [INFO][5944] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:57:29.630052 containerd[1427]: 2025-08-12 23:57:29.615 [INFO][5953] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" HandleID="k8s-pod-network.406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Workload="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:29.630052 containerd[1427]: 2025-08-12 23:57:29.615 [INFO][5953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:57:29.630052 containerd[1427]: 2025-08-12 23:57:29.615 [INFO][5953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:57:29.630052 containerd[1427]: 2025-08-12 23:57:29.624 [WARNING][5953] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" HandleID="k8s-pod-network.406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Workload="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:29.630052 containerd[1427]: 2025-08-12 23:57:29.624 [INFO][5953] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" HandleID="k8s-pod-network.406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Workload="localhost-k8s-calico--kube--controllers--597d6cb8d8--g8nzb-eth0" Aug 12 23:57:29.630052 containerd[1427]: 2025-08-12 23:57:29.626 [INFO][5953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:57:29.630052 containerd[1427]: 2025-08-12 23:57:29.628 [INFO][5944] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a" Aug 12 23:57:29.631114 containerd[1427]: time="2025-08-12T23:57:29.630655991Z" level=info msg="TearDown network for sandbox \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\" successfully" Aug 12 23:57:29.634605 containerd[1427]: time="2025-08-12T23:57:29.634360591Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 12 23:57:29.634605 containerd[1427]: time="2025-08-12T23:57:29.634442831Z" level=info msg="RemovePodSandbox \"406b51cac27f2a9a93e3cc14efd499791d8f445dac04f2ff32803a1248b8fd9a\" returns successfully" Aug 12 23:57:30.328337 sshd[5609]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:30.342387 systemd[1]: sshd@14-10.0.0.26:22-10.0.0.1:59178.service: Deactivated successfully. Aug 12 23:57:30.344350 systemd[1]: session-15.scope: Deactivated successfully. Aug 12 23:57:30.346216 systemd-logind[1412]: Session 15 logged out. Waiting for processes to exit. Aug 12 23:57:30.355444 systemd[1]: Started sshd@15-10.0.0.26:22-10.0.0.1:59186.service - OpenSSH per-connection server daemon (10.0.0.1:59186). Aug 12 23:57:30.357027 systemd-logind[1412]: Removed session 15. Aug 12 23:57:30.395495 sshd[5967]: Accepted publickey for core from 10.0.0.1 port 59186 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs Aug 12 23:57:30.397253 sshd[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:57:30.402649 systemd-logind[1412]: New session 16 of user core. Aug 12 23:57:30.414188 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 12 23:57:30.905981 sshd[5967]: pam_unix(sshd:session): session closed for user core Aug 12 23:57:30.918701 systemd[1]: sshd@15-10.0.0.26:22-10.0.0.1:59186.service: Deactivated successfully. Aug 12 23:57:30.921443 systemd[1]: session-16.scope: Deactivated successfully. Aug 12 23:57:30.923250 systemd-logind[1412]: Session 16 logged out. Waiting for processes to exit. Aug 12 23:57:30.932382 systemd[1]: Started sshd@16-10.0.0.26:22-10.0.0.1:59188.service - OpenSSH per-connection server daemon (10.0.0.1:59188). Aug 12 23:57:30.937538 systemd-logind[1412]: Removed session 16. 
Aug 12 23:57:30.982602 sshd[6001]: Accepted publickey for core from 10.0.0.1 port 59188 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:57:30.984283 sshd[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:30.988493 systemd-logind[1412]: New session 17 of user core.
Aug 12 23:57:30.997186 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 12 23:57:31.134767 sshd[6001]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:31.138600 systemd-logind[1412]: Session 17 logged out. Waiting for processes to exit.
Aug 12 23:57:31.138991 systemd[1]: sshd@16-10.0.0.26:22-10.0.0.1:59188.service: Deactivated successfully.
Aug 12 23:57:31.140783 systemd[1]: session-17.scope: Deactivated successfully.
Aug 12 23:57:31.141629 systemd-logind[1412]: Removed session 17.
Aug 12 23:57:34.878993 kubelet[2453]: I0812 23:57:34.878483 2453 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 12 23:57:36.163936 systemd[1]: Started sshd@17-10.0.0.26:22-10.0.0.1:39458.service - OpenSSH per-connection server daemon (10.0.0.1:39458).
Aug 12 23:57:36.205527 sshd[6024]: Accepted publickey for core from 10.0.0.1 port 39458 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:57:36.207277 sshd[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:36.211892 systemd-logind[1412]: New session 18 of user core.
Aug 12 23:57:36.220189 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 12 23:57:36.342362 sshd[6024]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:36.345286 systemd[1]: sshd@17-10.0.0.26:22-10.0.0.1:39458.service: Deactivated successfully.
Aug 12 23:57:36.347414 systemd[1]: session-18.scope: Deactivated successfully.
Aug 12 23:57:36.350236 systemd-logind[1412]: Session 18 logged out. Waiting for processes to exit.
Aug 12 23:57:36.351330 systemd-logind[1412]: Removed session 18.
Aug 12 23:57:40.176517 kubelet[2453]: E0812 23:57:40.176448 2453 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 12 23:57:41.358306 systemd[1]: Started sshd@18-10.0.0.26:22-10.0.0.1:39460.service - OpenSSH per-connection server daemon (10.0.0.1:39460).
Aug 12 23:57:41.401813 sshd[6087]: Accepted publickey for core from 10.0.0.1 port 39460 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:57:41.403095 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:41.407501 systemd-logind[1412]: New session 19 of user core.
Aug 12 23:57:41.424269 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 12 23:57:41.559515 sshd[6087]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:41.564603 systemd[1]: sshd@18-10.0.0.26:22-10.0.0.1:39460.service: Deactivated successfully.
Aug 12 23:57:41.567920 systemd[1]: session-19.scope: Deactivated successfully.
Aug 12 23:57:41.569586 systemd-logind[1412]: Session 19 logged out. Waiting for processes to exit.
Aug 12 23:57:41.571729 systemd-logind[1412]: Removed session 19.
Aug 12 23:57:46.573815 systemd[1]: Started sshd@19-10.0.0.26:22-10.0.0.1:59000.service - OpenSSH per-connection server daemon (10.0.0.1:59000).
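[Editor's note] The kubelet "Nameserver limits exceeded" error above reflects the classic glibc resolver cap of three nameserver entries: kubelet keeps only three servers in the pod's effective resolv.conf (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and logs the rest as omitted. A minimal sketch of that trim follows, assuming a simple keep-the-first-three policy; it is illustrative, not kubelet's actual dns.go code.

```go
// Sketch of the cap behind kubelet's "Nameserver limits exceeded"
// warning. maxNameservers mirrors the glibc MAXNS value of 3; the
// function name and policy are illustrative assumptions.
package main

import "fmt"

const maxNameservers = 3

// capNameservers keeps the first three servers and reports the rest
// as omitted, matching what the log line above describes.
func capNameservers(servers []string) (applied, omitted []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	applied, omitted := capNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	fmt.Println("applied:", applied, "omitted:", omitted)
}
```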
Aug 12 23:57:46.641755 sshd[6128]: Accepted publickey for core from 10.0.0.1 port 59000 ssh2: RSA SHA256:xv2nBVgCAUDE9/psT+0gyR3NWqhWRcWqt2l4ADAtRXs
Aug 12 23:57:46.644431 sshd[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 12 23:57:46.655395 systemd-logind[1412]: New session 20 of user core.
Aug 12 23:57:46.671228 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 12 23:57:46.850752 sshd[6128]: pam_unix(sshd:session): session closed for user core
Aug 12 23:57:46.856610 systemd[1]: sshd@19-10.0.0.26:22-10.0.0.1:59000.service: Deactivated successfully.
Aug 12 23:57:46.861779 systemd[1]: session-20.scope: Deactivated successfully.
Aug 12 23:57:46.863430 systemd-logind[1412]: Session 20 logged out. Waiting for processes to exit.
Aug 12 23:57:46.864903 systemd-logind[1412]: Removed session 20.
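[Editor's note] Every SSH connection in this log follows the same lifecycle: systemd starts a per-connection sshd@N-<host>:22-<peer>:<port>.service unit, sshd accepts the public key and pam_unix opens the session, systemd-logind allocates session-N.scope, and teardown runs in reverse. The small Go sketch below pairs the logind "New session" / "Removed session" lines to recover session lifetimes; the regular expressions are fitted to this log's line format, which is not a stable systemd interface.

```go
// sessiontrack.go - pairs the logind open/close journal lines above.
// The matched line shapes are an assumption taken from this log.
package main

import (
	"fmt"
	"regexp"
)

var (
	newSess     = regexp.MustCompile(`New session (\d+) of user (\S+)\.`)
	removedSess = regexp.MustCompile(`Removed session (\d+)\.`)
)

func main() {
	// Sample input copied from the session 16 entries in the log.
	lines := []string{
		`Aug 12 23:57:30.402649 systemd-logind[1412]: New session 16 of user core.`,
		`Aug 12 23:57:30.937538 systemd-logind[1412]: Removed session 16.`,
	}
	open := map[string]string{} // session ID -> user
	for _, l := range lines {
		if m := newSess.FindStringSubmatch(l); m != nil {
			open[m[1]] = m[2]
		} else if m := removedSess.FindStringSubmatch(l); m != nil {
			fmt.Printf("session %s (user %s) closed\n", m[1], open[m[1]])
			delete(open, m[1])
		}
	}
}
```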