Sep 10 00:20:13.847431 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 00:20:13.847453 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Sep 9 22:41:53 -00 2025
Sep 10 00:20:13.847463 kernel: KASLR enabled
Sep 10 00:20:13.847469 kernel: efi: EFI v2.7 by EDK II
Sep 10 00:20:13.847475 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 10 00:20:13.847480 kernel: random: crng init done
Sep 10 00:20:13.847488 kernel: ACPI: Early table checksum verification disabled
Sep 10 00:20:13.847493 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 10 00:20:13.847500 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 00:20:13.847507 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:20:13.847513 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:20:13.847519 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:20:13.847525 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:20:13.847531 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:20:13.847539 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:20:13.847546 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:20:13.847553 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:20:13.847559 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:20:13.847566 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 10 00:20:13.847572 kernel: NUMA: Failed to initialise from firmware
Sep 10 00:20:13.847578 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 00:20:13.847585 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Sep 10 00:20:13.847591 kernel: Zone ranges:
Sep 10 00:20:13.847597 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 00:20:13.847603 kernel: DMA32 empty
Sep 10 00:20:13.847610 kernel: Normal empty
Sep 10 00:20:13.847616 kernel: Movable zone start for each node
Sep 10 00:20:13.847623 kernel: Early memory node ranges
Sep 10 00:20:13.847629 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 10 00:20:13.847636 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 10 00:20:13.847643 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 10 00:20:13.847649 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 10 00:20:13.847655 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 10 00:20:13.847662 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 10 00:20:13.847668 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 10 00:20:13.847675 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 00:20:13.847681 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 10 00:20:13.847688 kernel: psci: probing for conduit method from ACPI.
Sep 10 00:20:13.847695 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 00:20:13.847701 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 00:20:13.847710 kernel: psci: Trusted OS migration not required
Sep 10 00:20:13.847716 kernel: psci: SMC Calling Convention v1.1
Sep 10 00:20:13.847723 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 00:20:13.847731 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 10 00:20:13.847738 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 10 00:20:13.847745 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 10 00:20:13.847752 kernel: Detected PIPT I-cache on CPU0
Sep 10 00:20:13.847759 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 00:20:13.847766 kernel: CPU features: detected: Hardware dirty bit management
Sep 10 00:20:13.847773 kernel: CPU features: detected: Spectre-v4
Sep 10 00:20:13.847780 kernel: CPU features: detected: Spectre-BHB
Sep 10 00:20:13.847787 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 00:20:13.847794 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 00:20:13.847802 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 00:20:13.847809 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 00:20:13.847815 kernel: alternatives: applying boot alternatives
Sep 10 00:20:13.847823 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9519a2b52292e68cf8bced92b7c71fffa7243efe8697174d43c360b4308144c8
Sep 10 00:20:13.847830 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 00:20:13.847837 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 00:20:13.847844 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 00:20:13.847851 kernel: Fallback order for Node 0: 0
Sep 10 00:20:13.847857 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 10 00:20:13.847864 kernel: Policy zone: DMA
Sep 10 00:20:13.847871 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 00:20:13.847879 kernel: software IO TLB: area num 4.
Sep 10 00:20:13.847886 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 10 00:20:13.847893 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Sep 10 00:20:13.847900 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 00:20:13.847907 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 00:20:13.847914 kernel: rcu: RCU event tracing is enabled.
Sep 10 00:20:13.847921 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 00:20:13.847928 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 00:20:13.847935 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 00:20:13.847942 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 00:20:13.847949 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 00:20:13.847956 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 00:20:13.847963 kernel: GICv3: 256 SPIs implemented
Sep 10 00:20:13.847970 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 00:20:13.847976 kernel: Root IRQ handler: gic_handle_irq
Sep 10 00:20:13.847983 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 00:20:13.847990 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 00:20:13.847997 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 00:20:13.848004 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 00:20:13.848011 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 10 00:20:13.848017 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 10 00:20:13.848032 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 10 00:20:13.848154 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 00:20:13.848165 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 00:20:13.848181 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 00:20:13.848189 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 00:20:13.848196 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 00:20:13.848202 kernel: arm-pv: using stolen time PV
Sep 10 00:20:13.848210 kernel: Console: colour dummy device 80x25
Sep 10 00:20:13.848217 kernel: ACPI: Core revision 20230628
Sep 10 00:20:13.848224 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 00:20:13.848230 kernel: pid_max: default: 32768 minimum: 301
Sep 10 00:20:13.848237 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 10 00:20:13.848246 kernel: landlock: Up and running.
Sep 10 00:20:13.848253 kernel: SELinux: Initializing.
Sep 10 00:20:13.848260 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:20:13.848267 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:20:13.848274 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:20:13.848281 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:20:13.848288 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 00:20:13.848295 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 00:20:13.848303 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 10 00:20:13.848312 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 10 00:20:13.848319 kernel: Remapping and enabling EFI services.
Sep 10 00:20:13.848325 kernel: smp: Bringing up secondary CPUs ...
Sep 10 00:20:13.848332 kernel: Detected PIPT I-cache on CPU1
Sep 10 00:20:13.848339 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 00:20:13.848346 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 10 00:20:13.848353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 00:20:13.848360 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 00:20:13.848367 kernel: Detected PIPT I-cache on CPU2
Sep 10 00:20:13.848374 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 10 00:20:13.848382 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 10 00:20:13.848389 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 00:20:13.848401 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 10 00:20:13.848410 kernel: Detected PIPT I-cache on CPU3
Sep 10 00:20:13.848417 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 10 00:20:13.848424 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 10 00:20:13.848431 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 00:20:13.848438 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 10 00:20:13.848446 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 00:20:13.848454 kernel: SMP: Total of 4 processors activated.
Sep 10 00:20:13.848461 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 00:20:13.848469 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 00:20:13.848476 kernel: CPU features: detected: Common not Private translations
Sep 10 00:20:13.848483 kernel: CPU features: detected: CRC32 instructions
Sep 10 00:20:13.848490 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 00:20:13.848498 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 00:20:13.848505 kernel: CPU features: detected: LSE atomic instructions
Sep 10 00:20:13.848513 kernel: CPU features: detected: Privileged Access Never
Sep 10 00:20:13.848521 kernel: CPU features: detected: RAS Extension Support
Sep 10 00:20:13.848528 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 00:20:13.848535 kernel: CPU: All CPU(s) started at EL1
Sep 10 00:20:13.848543 kernel: alternatives: applying system-wide alternatives
Sep 10 00:20:13.848550 kernel: devtmpfs: initialized
Sep 10 00:20:13.848557 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 00:20:13.848565 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 00:20:13.848572 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 00:20:13.848580 kernel: SMBIOS 3.0.0 present.
Sep 10 00:20:13.848588 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 10 00:20:13.848595 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 00:20:13.848602 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 00:20:13.848610 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 00:20:13.848617 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 00:20:13.848624 kernel: audit: initializing netlink subsys (disabled)
Sep 10 00:20:13.848632 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Sep 10 00:20:13.848639 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 00:20:13.848648 kernel: cpuidle: using governor menu
Sep 10 00:20:13.848655 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 00:20:13.848663 kernel: ASID allocator initialised with 32768 entries
Sep 10 00:20:13.848670 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 00:20:13.848677 kernel: Serial: AMBA PL011 UART driver
Sep 10 00:20:13.848684 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 00:20:13.848692 kernel: Modules: 0 pages in range for non-PLT usage
Sep 10 00:20:13.848699 kernel: Modules: 509008 pages in range for PLT usage
Sep 10 00:20:13.848706 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 00:20:13.848715 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 00:20:13.848722 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 00:20:13.848729 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 00:20:13.848737 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 00:20:13.848744 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 00:20:13.848751 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 00:20:13.848758 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 00:20:13.848765 kernel: ACPI: Added _OSI(Module Device)
Sep 10 00:20:13.848772 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 00:20:13.848781 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 00:20:13.848788 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 00:20:13.848795 kernel: ACPI: Interpreter enabled
Sep 10 00:20:13.848803 kernel: ACPI: Using GIC for interrupt routing
Sep 10 00:20:13.848810 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 00:20:13.848817 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 00:20:13.848824 kernel: printk: console [ttyAMA0] enabled
Sep 10 00:20:13.848831 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 00:20:13.848973 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 00:20:13.849085 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 00:20:13.849159 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 00:20:13.849226 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 00:20:13.849333 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 00:20:13.849345 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 00:20:13.849353 kernel: PCI host bridge to bus 0000:00
Sep 10 00:20:13.849432 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 00:20:13.849499 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 00:20:13.849556 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 00:20:13.849614 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 00:20:13.849698 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 10 00:20:13.849773 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 10 00:20:13.849840 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 10 00:20:13.849908 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 10 00:20:13.849974 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 00:20:13.850085 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 00:20:13.850160 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 10 00:20:13.850242 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 10 00:20:13.850303 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 00:20:13.850361 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 00:20:13.850425 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 00:20:13.850436 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 00:20:13.850444 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 00:20:13.850452 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 00:20:13.850469 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 00:20:13.850478 kernel: iommu: Default domain type: Translated
Sep 10 00:20:13.850485 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 00:20:13.850493 kernel: efivars: Registered efivars operations
Sep 10 00:20:13.850500 kernel: vgaarb: loaded
Sep 10 00:20:13.850509 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 00:20:13.850517 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 00:20:13.850525 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 00:20:13.850532 kernel: pnp: PnP ACPI init
Sep 10 00:20:13.850608 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 10 00:20:13.850619 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 00:20:13.850627 kernel: NET: Registered PF_INET protocol family
Sep 10 00:20:13.850634 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 00:20:13.850644 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 00:20:13.850651 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 00:20:13.850659 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 00:20:13.850667 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 00:20:13.850674 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 00:20:13.850681 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:20:13.850689 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:20:13.850696 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 00:20:13.850704 kernel: PCI: CLS 0 bytes, default 64
Sep 10 00:20:13.850712 kernel: kvm [1]: HYP mode not available
Sep 10 00:20:13.850720 kernel: Initialise system trusted keyrings
Sep 10 00:20:13.850727 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 00:20:13.850735 kernel: Key type asymmetric registered
Sep 10 00:20:13.850742 kernel: Asymmetric key parser 'x509' registered
Sep 10 00:20:13.850749 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 10 00:20:13.850757 kernel: io scheduler mq-deadline registered
Sep 10 00:20:13.850764 kernel: io scheduler kyber registered
Sep 10 00:20:13.850772 kernel: io scheduler bfq registered
Sep 10 00:20:13.850781 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 00:20:13.850788 kernel: ACPI: button: Power Button [PWRB]
Sep 10 00:20:13.850796 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 10 00:20:13.850865 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 10 00:20:13.850875 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 00:20:13.850883 kernel: thunder_xcv, ver 1.0
Sep 10 00:20:13.850891 kernel: thunder_bgx, ver 1.0
Sep 10 00:20:13.850898 kernel: nicpf, ver 1.0
Sep 10 00:20:13.850906 kernel: nicvf, ver 1.0
Sep 10 00:20:13.850982 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 00:20:13.851065 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T00:20:13 UTC (1757463613)
Sep 10 00:20:13.851076 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 00:20:13.851084 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 10 00:20:13.851092 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 10 00:20:13.851099 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 00:20:13.851107 kernel: NET: Registered PF_INET6 protocol family
Sep 10 00:20:13.851114 kernel: Segment Routing with IPv6
Sep 10 00:20:13.851125 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 00:20:13.851133 kernel: NET: Registered PF_PACKET protocol family
Sep 10 00:20:13.851140 kernel: Key type dns_resolver registered
Sep 10 00:20:13.851148 kernel: registered taskstats version 1
Sep 10 00:20:13.851155 kernel: Loading compiled-in X.509 certificates
Sep 10 00:20:13.851163 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: e85a1044dffeb2f9696d4659bfe36fdfbb79b10c'
Sep 10 00:20:13.851170 kernel: Key type .fscrypt registered
Sep 10 00:20:13.851177 kernel: Key type fscrypt-provisioning registered
Sep 10 00:20:13.851185 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 00:20:13.851193 kernel: ima: Allocated hash algorithm: sha1
Sep 10 00:20:13.851201 kernel: ima: No architecture policies found
Sep 10 00:20:13.851208 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 00:20:13.851216 kernel: clk: Disabling unused clocks
Sep 10 00:20:13.851223 kernel: Freeing unused kernel memory: 39424K
Sep 10 00:20:13.851231 kernel: Run /init as init process
Sep 10 00:20:13.851238 kernel: with arguments:
Sep 10 00:20:13.851249 kernel: /init
Sep 10 00:20:13.851256 kernel: with environment:
Sep 10 00:20:13.851265 kernel: HOME=/
Sep 10 00:20:13.851273 kernel: TERM=linux
Sep 10 00:20:13.851280 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 00:20:13.851289 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 10 00:20:13.851298 systemd[1]: Detected virtualization kvm.
Sep 10 00:20:13.851307 systemd[1]: Detected architecture arm64.
Sep 10 00:20:13.851314 systemd[1]: Running in initrd.
Sep 10 00:20:13.851322 systemd[1]: No hostname configured, using default hostname.
Sep 10 00:20:13.851331 systemd[1]: Hostname set to <localhost>.
Sep 10 00:20:13.851339 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:20:13.851347 systemd[1]: Queued start job for default target initrd.target.
Sep 10 00:20:13.851355 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:20:13.851363 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:20:13.851372 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 00:20:13.851380 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:20:13.851390 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 00:20:13.851398 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 00:20:13.851407 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 00:20:13.851415 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 00:20:13.851423 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:20:13.851431 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:20:13.851439 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:20:13.851449 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:20:13.851457 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:20:13.851465 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:20:13.851472 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:20:13.851480 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:20:13.851489 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 00:20:13.851496 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 10 00:20:13.851505 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:20:13.851514 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:20:13.851524 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:20:13.851532 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:20:13.851540 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 00:20:13.851549 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:20:13.851557 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 00:20:13.851564 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 00:20:13.851572 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:20:13.851580 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:20:13.851590 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:20:13.851598 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 00:20:13.851606 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:20:13.851614 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 00:20:13.851623 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 00:20:13.851650 systemd-journald[237]: Collecting audit messages is disabled.
Sep 10 00:20:13.851669 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 00:20:13.851678 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:20:13.851686 systemd-journald[237]: Journal started
Sep 10 00:20:13.851707 systemd-journald[237]: Runtime Journal (/run/log/journal/f1391b7fb4b54937bb2408ebba90f37e) is 5.9M, max 47.3M, 41.4M free.
Sep 10 00:20:13.841204 systemd-modules-load[238]: Inserted module 'overlay'
Sep 10 00:20:13.853638 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:20:13.856112 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 00:20:13.858007 systemd-modules-load[238]: Inserted module 'br_netfilter'
Sep 10 00:20:13.858796 kernel: Bridge firewalling registered
Sep 10 00:20:13.864191 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:20:13.865905 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:20:13.867759 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:20:13.870252 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:20:13.872315 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:20:13.878282 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:20:13.882134 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:20:13.886081 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:20:13.887261 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:20:13.904239 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 00:20:13.906356 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:20:13.916035 dracut-cmdline[278]: dracut-dracut-053
Sep 10 00:20:13.918481 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9519a2b52292e68cf8bced92b7c71fffa7243efe8697174d43c360b4308144c8
Sep 10 00:20:13.932926 systemd-resolved[279]: Positive Trust Anchors:
Sep 10 00:20:13.932947 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:20:13.932978 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:20:13.937833 systemd-resolved[279]: Defaulting to hostname 'linux'.
Sep 10 00:20:13.938821 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:20:13.941137 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:20:13.983077 kernel: SCSI subsystem initialized
Sep 10 00:20:13.987065 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 00:20:13.995085 kernel: iscsi: registered transport (tcp)
Sep 10 00:20:14.007281 kernel: iscsi: registered transport (qla4xxx)
Sep 10 00:20:14.007301 kernel: QLogic iSCSI HBA Driver
Sep 10 00:20:14.048717 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 00:20:14.056192 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 00:20:14.073665 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 00:20:14.073726 kernel: device-mapper: uevent: version 1.0.3
Sep 10 00:20:14.073737 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 10 00:20:14.119067 kernel: raid6: neonx8 gen() 15744 MB/s
Sep 10 00:20:14.136059 kernel: raid6: neonx4 gen() 15669 MB/s
Sep 10 00:20:14.153061 kernel: raid6: neonx2 gen() 13245 MB/s
Sep 10 00:20:14.170058 kernel: raid6: neonx1 gen() 10520 MB/s
Sep 10 00:20:14.187058 kernel: raid6: int64x8 gen() 6959 MB/s
Sep 10 00:20:14.204059 kernel: raid6: int64x4 gen() 7325 MB/s
Sep 10 00:20:14.221064 kernel: raid6: int64x2 gen() 6124 MB/s
Sep 10 00:20:14.238067 kernel: raid6: int64x1 gen() 5055 MB/s
Sep 10 00:20:14.238098 kernel: raid6: using algorithm neonx8 gen() 15744 MB/s
Sep 10 00:20:14.255060 kernel: raid6: .... xor() 12038 MB/s, rmw enabled
Sep 10 00:20:14.255074 kernel: raid6: using neon recovery algorithm
Sep 10 00:20:14.260096 kernel: xor: measuring software checksum speed
Sep 10 00:20:14.260121 kernel: 8regs : 19683 MB/sec
Sep 10 00:20:14.261113 kernel: 32regs : 19636 MB/sec
Sep 10 00:20:14.261125 kernel: arm64_neon : 27114 MB/sec
Sep 10 00:20:14.261134 kernel: xor: using function: arm64_neon (27114 MB/sec)
Sep 10 00:20:14.309072 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 00:20:14.319793 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:20:14.330184 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:20:14.342740 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Sep 10 00:20:14.345843 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:20:14.348170 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 00:20:14.362510 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Sep 10 00:20:14.387412 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:20:14.396172 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:20:14.436315 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:20:14.446213 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 00:20:14.458133 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:20:14.459413 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:20:14.460849 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:20:14.462748 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:20:14.471193 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 00:20:14.481567 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:20:14.488662 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 10 00:20:14.489585 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 00:20:14.491352 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 00:20:14.491381 kernel: GPT:9289727 != 19775487
Sep 10 00:20:14.491391 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 00:20:14.493065 kernel: GPT:9289727 != 19775487
Sep 10 00:20:14.493105 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 00:20:14.493117 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:20:14.495534 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:20:14.495644 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:20:14.498890 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:20:14.499761 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:20:14.499883 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:20:14.502427 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:20:14.509281 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:20:14.515076 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (512)
Sep 10 00:20:14.524058 kernel: BTRFS: device fsid 56932cd9-691c-4ccb-8da6-e6508edf5f69 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (520)
Sep 10 00:20:14.527591 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 00:20:14.529765 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:20:14.534734 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 00:20:14.539193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:20:14.545456 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 00:20:14.546407 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 00:20:14.563200 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 00:20:14.565123 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:20:14.570995 disk-uuid[550]: Primary Header is updated.
Sep 10 00:20:14.570995 disk-uuid[550]: Secondary Entries is updated.
Sep 10 00:20:14.570995 disk-uuid[550]: Secondary Header is updated.
Sep 10 00:20:14.577072 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:20:14.581236 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:20:15.586068 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:20:15.586772 disk-uuid[554]: The operation has completed successfully.
Sep 10 00:20:15.605678 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 00:20:15.605782 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 00:20:15.630205 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 00:20:15.633081 sh[573]: Success
Sep 10 00:20:15.642084 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 10 00:20:15.681513 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 00:20:15.683188 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 00:20:15.684535 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 00:20:15.694409 kernel: BTRFS info (device dm-0): first mount of filesystem 56932cd9-691c-4ccb-8da6-e6508edf5f69
Sep 10 00:20:15.694441 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:20:15.694452 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 10 00:20:15.697291 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 00:20:15.697306 kernel: BTRFS info (device dm-0): using free space tree
Sep 10 00:20:15.700772 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 00:20:15.702016 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 00:20:15.711195 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 00:20:15.712622 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 00:20:15.720626 kernel: BTRFS info (device vda6): first mount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:20:15.720676 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:20:15.720687 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:20:15.724074 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:20:15.731393 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 10 00:20:15.732888 kernel: BTRFS info (device vda6): last unmount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:20:15.740307 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 00:20:15.749257 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 00:20:15.809128 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:20:15.821232 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:20:15.828513 ignition[666]: Ignition 2.19.0
Sep 10 00:20:15.828522 ignition[666]: Stage: fetch-offline
Sep 10 00:20:15.828559 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:20:15.828566 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:20:15.828720 ignition[666]: parsed url from cmdline: ""
Sep 10 00:20:15.828723 ignition[666]: no config URL provided
Sep 10 00:20:15.828728 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 00:20:15.828734 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Sep 10 00:20:15.828757 ignition[666]: op(1): [started] loading QEMU firmware config module
Sep 10 00:20:15.828761 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 00:20:15.835018 ignition[666]: op(1): [finished] loading QEMU firmware config module
Sep 10 00:20:15.840680 systemd-networkd[765]: lo: Link UP
Sep 10 00:20:15.840692 systemd-networkd[765]: lo: Gained carrier
Sep 10 00:20:15.841434 systemd-networkd[765]: Enumeration completed
Sep 10 00:20:15.841851 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:20:15.841854 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:20:15.842094 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:20:15.842676 systemd-networkd[765]: eth0: Link UP
Sep 10 00:20:15.842679 systemd-networkd[765]: eth0: Gained carrier
Sep 10 00:20:15.842686 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:20:15.846059 systemd[1]: Reached target network.target - Network.
Sep 10 00:20:15.866098 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:20:15.888324 ignition[666]: parsing config with SHA512: 07b7cc42a1fd9efa768bd777fd8e22a5511c621409c02e89897b0a2539804d4f131043e8f753f7daed8c6966dfee1f9b37dbb4074742e71cce30b21eaebb0f14
Sep 10 00:20:15.894137 unknown[666]: fetched base config from "system"
Sep 10 00:20:15.894151 unknown[666]: fetched user config from "qemu"
Sep 10 00:20:15.894796 ignition[666]: fetch-offline: fetch-offline passed
Sep 10 00:20:15.894863 ignition[666]: Ignition finished successfully
Sep 10 00:20:15.896620 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:20:15.899548 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 00:20:15.911205 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 00:20:15.922014 ignition[771]: Ignition 2.19.0
Sep 10 00:20:15.922025 ignition[771]: Stage: kargs
Sep 10 00:20:15.922229 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:20:15.922239 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:20:15.923134 ignition[771]: kargs: kargs passed
Sep 10 00:20:15.923177 ignition[771]: Ignition finished successfully
Sep 10 00:20:15.927080 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 00:20:15.941234 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 00:20:15.950806 ignition[778]: Ignition 2.19.0
Sep 10 00:20:15.950816 ignition[778]: Stage: disks
Sep 10 00:20:15.950992 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:20:15.951011 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:20:15.951891 ignition[778]: disks: disks passed
Sep 10 00:20:15.953306 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 00:20:15.951934 ignition[778]: Ignition finished successfully
Sep 10 00:20:15.954314 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 00:20:15.955344 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 00:20:15.956939 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:20:15.958224 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:20:15.959685 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:20:15.962066 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 00:20:15.975402 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 10 00:20:15.981685 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 00:20:15.996172 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 00:20:16.036074 kernel: EXT4-fs (vda9): mounted filesystem 43028332-c79c-426f-8992-528d495eb356 r/w with ordered data mode. Quota mode: none.
Sep 10 00:20:16.036586 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 00:20:16.037731 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:20:16.048138 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:20:16.049599 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 00:20:16.050586 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 00:20:16.050657 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 00:20:16.050715 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:20:16.057073 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (798)
Sep 10 00:20:16.056674 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 00:20:16.058395 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 00:20:16.062744 kernel: BTRFS info (device vda6): first mount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:20:16.062768 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:20:16.062779 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:20:16.066113 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:20:16.067492 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:20:16.095424 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 00:20:16.100328 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Sep 10 00:20:16.104246 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 00:20:16.108164 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 00:20:16.179240 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 00:20:16.199217 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 00:20:16.203773 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 00:20:16.210076 kernel: BTRFS info (device vda6): last unmount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:20:16.224382 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 00:20:16.230294 ignition[914]: INFO : Ignition 2.19.0
Sep 10 00:20:16.230294 ignition[914]: INFO : Stage: mount
Sep 10 00:20:16.231688 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:20:16.231688 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:20:16.231688 ignition[914]: INFO : mount: mount passed
Sep 10 00:20:16.231688 ignition[914]: INFO : Ignition finished successfully
Sep 10 00:20:16.233304 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 00:20:16.246150 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 00:20:16.693769 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 00:20:16.707215 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:20:16.713490 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (926)
Sep 10 00:20:16.713525 kernel: BTRFS info (device vda6): first mount of filesystem 1f9a2be6-c1a7-433d-9dbe-1e5d2ce6fc09
Sep 10 00:20:16.713536 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 00:20:16.714172 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:20:16.717077 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:20:16.717597 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 00:20:16.733247 ignition[943]: INFO : Ignition 2.19.0
Sep 10 00:20:16.733247 ignition[943]: INFO : Stage: files
Sep 10 00:20:16.734548 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 00:20:16.734548 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:20:16.734548 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 00:20:16.737420 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 00:20:16.737420 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 00:20:16.737420 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 00:20:16.737420 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 00:20:16.737420 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 00:20:16.737225 unknown[943]: wrote ssh authorized keys file for user: core
Sep 10 00:20:16.743480 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 10 00:20:16.743480 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 10 00:20:16.799118 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 00:20:17.240113 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 10 00:20:17.240113 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 00:20:17.243779 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 00:20:17.243779 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:20:17.243779 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 00:20:17.243779 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:20:17.243779 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 00:20:17.243779 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:20:17.243779 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 00:20:17.253319 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:20:17.253319 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 00:20:17.253319 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 00:20:17.253319 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 00:20:17.253319 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 00:20:17.253319 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 10 00:20:17.675320 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 10 00:20:17.896300 systemd-networkd[765]: eth0: Gained IPv6LL
Sep 10 00:20:18.440473 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 00:20:18.440473 ignition[943]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 10 00:20:18.443569 ignition[943]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:20:18.443569 ignition[943]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 00:20:18.443569 ignition[943]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 10 00:20:18.443569 ignition[943]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 10 00:20:18.443569 ignition[943]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:20:18.443569 ignition[943]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 00:20:18.443569 ignition[943]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 10 00:20:18.443569 ignition[943]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:20:18.477443 ignition[943]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:20:18.481738 ignition[943]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 00:20:18.481738 ignition[943]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 00:20:18.481738 ignition[943]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 00:20:18.481738 ignition[943]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 00:20:18.489219 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:20:18.489219 ignition[943]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 00:20:18.489219 ignition[943]: INFO : files: files passed
Sep 10 00:20:18.489219 ignition[943]: INFO : Ignition finished successfully
Sep 10 00:20:18.489092 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 00:20:18.498466 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 00:20:18.501235 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 00:20:18.503413 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 00:20:18.504199 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 00:20:18.508647 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 00:20:18.511424 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:20:18.511424 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:20:18.513950 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 00:20:18.513919 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 00:20:18.516249 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 00:20:18.521213 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 00:20:18.541571 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 00:20:18.542326 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 00:20:18.543501 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 00:20:18.544791 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 00:20:18.546184 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 00:20:18.548165 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 00:20:18.566607 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:20:18.573214 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 00:20:18.580557 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:20:18.581536 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:20:18.583040 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 00:20:18.584444 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 00:20:18.584555 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 00:20:18.586549 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 00:20:18.588013 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 00:20:18.589324 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 10 00:20:18.590630 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:20:18.592056 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 00:20:18.593636 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 00:20:18.594996 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:20:18.596740 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 00:20:18.598217 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 00:20:18.599524 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 00:20:18.600806 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 00:20:18.600916 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:20:18.602705 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:20:18.604291 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 00:20:18.605787 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 00:20:18.609113 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 00:20:18.610038 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 00:20:18.610163 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 00:20:18.612569 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 00:20:18.612681 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 00:20:18.614133 systemd[1]: Stopped target paths.target - Path Units. Sep 10 00:20:18.615359 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 00:20:18.616152 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 00:20:18.618211 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 00:20:18.619865 systemd[1]: Stopped target sockets.target - Socket Units. Sep 10 00:20:18.622828 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 00:20:18.622920 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 00:20:18.624489 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 00:20:18.624563 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 00:20:18.625809 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 00:20:18.625913 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 00:20:18.627294 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 00:20:18.627387 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 00:20:18.642817 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 00:20:18.644833 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 00:20:18.646361 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 00:20:18.647302 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 00:20:18.648222 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 00:20:18.648319 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 00:20:18.653281 ignition[998]: INFO : Ignition 2.19.0 Sep 10 00:20:18.653281 ignition[998]: INFO : Stage: umount Sep 10 00:20:18.653281 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:20:18.653281 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:20:18.656971 ignition[998]: INFO : umount: umount passed Sep 10 00:20:18.656971 ignition[998]: INFO : Ignition finished successfully Sep 10 00:20:18.656547 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 00:20:18.656837 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 10 00:20:18.660199 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 00:20:18.660277 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 10 00:20:18.662670 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 00:20:18.664009 systemd[1]: Stopped target network.target - Network. Sep 10 00:20:18.665445 systemd[1]: ignition-disks.service: Deactivated successfully. 
Sep 10 00:20:18.665521 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 10 00:20:18.666936 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 00:20:18.666977 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 00:20:18.668439 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 00:20:18.668480 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 00:20:18.669850 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 00:20:18.669889 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 10 00:20:18.671388 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 00:20:18.672767 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 00:20:18.681106 systemd-networkd[765]: eth0: DHCPv6 lease lost Sep 10 00:20:18.682393 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 00:20:18.682496 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 10 00:20:18.684310 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 00:20:18.684421 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 10 00:20:18.686367 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 00:20:18.686418 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 10 00:20:18.699153 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 10 00:20:18.699805 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 10 00:20:18.699857 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 00:20:18.701443 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:20:18.701482 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:20:18.702910 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 00:20:18.702948 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 10 00:20:18.704587 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 10 00:20:18.704621 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 00:20:18.706157 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 00:20:18.715540 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 00:20:18.715638 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 00:20:18.724757 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 00:20:18.724897 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 00:20:18.727744 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 00:20:18.727835 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 00:20:18.729638 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 00:20:18.729694 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 00:20:18.730635 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 00:20:18.730667 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 00:20:18.731941 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 00:20:18.731988 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Sep 10 00:20:18.734123 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 00:20:18.734163 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 10 00:20:18.736159 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 00:20:18.736197 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 00:20:18.738622 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 00:20:18.738662 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 00:20:18.752224 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 00:20:18.756733 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 00:20:18.756782 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 00:20:18.758486 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 10 00:20:18.758520 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 00:20:18.759968 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 00:20:18.760013 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 00:20:18.761741 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 00:20:18.761776 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 00:20:18.763554 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 00:20:18.763623 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 00:20:18.767591 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 00:20:18.769483 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 00:20:18.778422 systemd[1]: Switching root. Sep 10 00:20:18.811646 systemd-journald[237]: Journal stopped Sep 10 00:20:19.438999 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 10 00:20:19.439068 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 00:20:19.439082 kernel: SELinux: policy capability open_perms=1 Sep 10 00:20:19.439092 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 00:20:19.439102 kernel: SELinux: policy capability always_check_network=0 Sep 10 00:20:19.439111 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 00:20:19.439121 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 00:20:19.439130 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 00:20:19.439140 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 00:20:19.439150 kernel: audit: type=1403 audit(1757463618.944:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 00:20:19.439162 systemd[1]: Successfully loaded SELinux policy in 29.908ms. Sep 10 00:20:19.439182 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.184ms. Sep 10 00:20:19.439194 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 10 00:20:19.439205 systemd[1]: Detected virtualization kvm. Sep 10 00:20:19.439216 systemd[1]: Detected architecture arm64. 
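Everything up to "Switching root" ran from the initramfs; the journal handoff (journald stopped by SIGTERM from PID 1, then restarted on the real root) is why a second timestamp stream begins here. To replay just PID 1's view of this pivot on a live system, standard journalctl filtering suffices (a usage sketch):

    journalctl -b 0 -o short-precise _PID=1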
Sep 10 00:20:19.439226 systemd[1]: Detected first boot. Sep 10 00:20:19.439236 systemd[1]: Initializing machine ID from VM UUID. Sep 10 00:20:19.439246 zram_generator::config[1046]: No configuration found. Sep 10 00:20:19.439259 systemd[1]: Populated /etc with preset unit settings. Sep 10 00:20:19.439270 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 10 00:20:19.439280 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 10 00:20:19.439293 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 10 00:20:19.439305 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 10 00:20:19.439320 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 10 00:20:19.439331 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 10 00:20:19.439341 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 10 00:20:19.439353 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 10 00:20:19.439363 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 10 00:20:19.439374 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 10 00:20:19.439384 systemd[1]: Created slice user.slice - User and Session Slice. Sep 10 00:20:19.439394 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 00:20:19.439405 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 00:20:19.439416 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 10 00:20:19.439426 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 10 00:20:19.439437 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 10 00:20:19.439449 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 00:20:19.439460 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 10 00:20:19.439471 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 00:20:19.439481 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 10 00:20:19.439491 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 10 00:20:19.439502 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 10 00:20:19.439513 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 10 00:20:19.439554 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 00:20:19.439570 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 00:20:19.439581 systemd[1]: Reached target slices.target - Slice Units. Sep 10 00:20:19.439591 systemd[1]: Reached target swap.target - Swaps. Sep 10 00:20:19.439601 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 10 00:20:19.439611 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 10 00:20:19.439622 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 00:20:19.439632 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
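"Populated /etc with preset unit settings" is where the preset state Ignition recorded earlier takes effect (enable prepare-helm.service, disable coreos-metadata.service). In systemd.preset(5) syntax that state is just two lines; the file name below is the one Ignition conventionally writes, shown as an assumption since the journal does not print it:

    # /etc/systemd/system-preset/20-ignition.preset
    enable prepare-helm.service
    disable coreos-metadata.service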
Sep 10 00:20:19.439642 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 00:20:19.439653 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 10 00:20:19.439665 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 10 00:20:19.439676 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 10 00:20:19.439686 systemd[1]: Mounting media.mount - External Media Directory... Sep 10 00:20:19.439697 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 10 00:20:19.439707 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 10 00:20:19.439717 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 10 00:20:19.439728 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 00:20:19.439741 systemd[1]: Reached target machines.target - Containers. Sep 10 00:20:19.439772 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 10 00:20:19.439790 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:20:19.439800 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 00:20:19.439811 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 10 00:20:19.439821 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:20:19.439832 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 00:20:19.439842 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:20:19.439853 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 10 00:20:19.439864 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:20:19.439876 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 00:20:19.439887 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 10 00:20:19.439897 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 10 00:20:19.439908 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 10 00:20:19.439918 systemd[1]: Stopped systemd-fsck-usr.service. Sep 10 00:20:19.439929 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 00:20:19.439939 kernel: ACPI: bus type drm_connector registered Sep 10 00:20:19.439950 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 00:20:19.439961 kernel: fuse: init (API version 7.39) Sep 10 00:20:19.439972 kernel: loop: module loaded Sep 10 00:20:19.439992 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 00:20:19.440004 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 10 00:20:19.440014 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 00:20:19.440029 systemd[1]: verity-setup.service: Deactivated successfully. Sep 10 00:20:19.440039 systemd[1]: Stopped verity-setup.service. Sep 10 00:20:19.440058 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
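The modprobe@<name>.service entries are instances of systemd's stock modprobe@.service template, which simply runs modprobe against the instance name; the "fuse: init" and "loop: module loaded" kernel lines above are its result. The same mechanism can be driven by hand:

    systemctl start modprobe@fuse.service   # equivalent to: modprobe fuse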
Sep 10 00:20:19.440105 systemd-journald[1109]: Collecting audit messages is disabled. Sep 10 00:20:19.440135 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 10 00:20:19.440146 systemd-journald[1109]: Journal started Sep 10 00:20:19.440167 systemd-journald[1109]: Runtime Journal (/run/log/journal/f1391b7fb4b54937bb2408ebba90f37e) is 5.9M, max 47.3M, 41.4M free. Sep 10 00:20:19.267541 systemd[1]: Queued start job for default target multi-user.target. Sep 10 00:20:19.286838 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 10 00:20:19.287180 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 10 00:20:19.442324 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 00:20:19.442921 systemd[1]: Mounted media.mount - External Media Directory. Sep 10 00:20:19.444135 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 10 00:20:19.445040 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 10 00:20:19.445936 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 10 00:20:19.448083 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 10 00:20:19.449196 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 00:20:19.450388 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 00:20:19.450523 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 10 00:20:19.451813 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:20:19.451941 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:20:19.453160 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:20:19.453285 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 00:20:19.454326 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:20:19.454469 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:20:19.455653 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 00:20:19.455784 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 10 00:20:19.456900 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:20:19.457039 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:20:19.459168 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 00:20:19.460305 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 00:20:19.461457 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 10 00:20:19.473876 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 00:20:19.481159 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 10 00:20:19.483039 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 10 00:20:19.483860 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 00:20:19.483899 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 00:20:19.485660 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 10 00:20:19.487675 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
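The runtime journal sizes above (5.9M used, 47.3M max) are derived automatically from /run capacity. If fixed limits are preferred, journald.conf can pin them explicitly; the value below is illustrative, not taken from this system:

    [Journal]
    RuntimeMaxUse=48M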
Sep 10 00:20:19.489534 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 10 00:20:19.490418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:20:19.491835 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 10 00:20:19.493613 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 10 00:20:19.494530 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:20:19.498228 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 10 00:20:19.499567 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 00:20:19.502401 systemd-journald[1109]: Time spent on flushing to /var/log/journal/f1391b7fb4b54937bb2408ebba90f37e is 20.894ms for 852 entries. Sep 10 00:20:19.502401 systemd-journald[1109]: System Journal (/var/log/journal/f1391b7fb4b54937bb2408ebba90f37e) is 8.0M, max 195.6M, 187.6M free. Sep 10 00:20:19.540917 systemd-journald[1109]: Received client request to flush runtime journal. Sep 10 00:20:19.540964 kernel: loop0: detected capacity change from 0 to 114328 Sep 10 00:20:19.540989 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 00:20:19.503224 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 00:20:19.505839 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 10 00:20:19.510263 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 10 00:20:19.512768 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 00:20:19.514008 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 10 00:20:19.515231 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 10 00:20:19.516380 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 00:20:19.519083 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 10 00:20:19.528920 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 10 00:20:19.533299 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 10 00:20:19.541116 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 10 00:20:19.545332 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 10 00:20:19.548920 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:20:19.553448 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 10 00:20:19.555955 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Sep 10 00:20:19.555973 systemd-tmpfiles[1154]: ACLs are not supported, ignoring. Sep 10 00:20:19.562302 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 00:20:19.574098 kernel: loop1: detected capacity change from 0 to 114432 Sep 10 00:20:19.577170 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Sep 10 00:20:19.578525 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 00:20:19.579242 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 10 00:20:19.599694 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 10 00:20:19.604124 kernel: loop2: detected capacity change from 0 to 207008 Sep 10 00:20:19.606321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 00:20:19.621177 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Sep 10 00:20:19.621194 systemd-tmpfiles[1178]: ACLs are not supported, ignoring. Sep 10 00:20:19.625107 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 00:20:19.634144 kernel: loop3: detected capacity change from 0 to 114328 Sep 10 00:20:19.640092 kernel: loop4: detected capacity change from 0 to 114432 Sep 10 00:20:19.645099 kernel: loop5: detected capacity change from 0 to 207008 Sep 10 00:20:19.648941 (sd-merge)[1182]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 10 00:20:19.649346 (sd-merge)[1182]: Merged extensions into '/usr'. Sep 10 00:20:19.656314 systemd[1]: Reloading requested from client PID 1153 ('systemd-sysext') (unit systemd-sysext.service)... Sep 10 00:20:19.656334 systemd[1]: Reloading... Sep 10 00:20:19.718354 zram_generator::config[1206]: No configuration found. Sep 10 00:20:19.753315 ldconfig[1148]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 00:20:19.810826 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:20:19.846419 systemd[1]: Reloading finished in 189 ms. Sep 10 00:20:19.879615 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 10 00:20:19.881551 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 10 00:20:19.892481 systemd[1]: Starting ensure-sysext.service... Sep 10 00:20:19.894245 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 00:20:19.898649 systemd[1]: Reloading requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)... Sep 10 00:20:19.898661 systemd[1]: Reloading... Sep 10 00:20:19.910346 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 00:20:19.910596 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 10 00:20:19.911249 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 00:20:19.911460 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Sep 10 00:20:19.911515 systemd-tmpfiles[1243]: ACLs are not supported, ignoring. Sep 10 00:20:19.913628 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 00:20:19.913641 systemd-tmpfiles[1243]: Skipping /boot Sep 10 00:20:19.921008 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 00:20:19.921025 systemd-tmpfiles[1243]: Skipping /boot Sep 10 00:20:19.955080 zram_generator::config[1270]: No configuration found. 
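The (sd-merge) lines are systemd-sysext overlaying the three extension images onto /usr, which is why systemd immediately reloads and ldconfig and ensure-sysext run afterwards. An image is only merged when its embedded extension-release matches the host's os-release; for the kubernetes image fetched earlier, that metadata lives inside the raw image at a path like the following (field values are an assumption, not read from this journal):

    # usr/lib/extension-release.d/extension-release.kubernetes
    ID=flatcar
    SYSEXT_LEVEL=1.0

After boot, systemd-sysext status lists the merged hierarchies and systemd-sysext refresh re-merges after images change.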
Sep 10 00:20:20.034720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:20:20.070242 systemd[1]: Reloading finished in 171 ms. Sep 10 00:20:20.096114 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 10 00:20:20.104414 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 00:20:20.111687 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 10 00:20:20.113953 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 10 00:20:20.115950 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 10 00:20:20.120328 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 00:20:20.125293 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 00:20:20.128337 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 10 00:20:20.132335 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:20:20.134616 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:20:20.136572 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:20:20.140398 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:20:20.142797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:20:20.144493 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 00:20:20.146156 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 10 00:20:20.147660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:20:20.147789 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:20:20.150275 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:20:20.150392 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:20:20.153429 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:20:20.153550 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:20:20.160169 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:20:20.166438 augenrules[1336]: No rules Sep 10 00:20:20.167384 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:20:20.170363 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:20:20.171138 systemd-udevd[1317]: Using default interface naming scheme 'v255'. Sep 10 00:20:20.174814 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:20:20.175870 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:20:20.179281 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 00:20:20.181684 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Sep 10 00:20:20.183537 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 10 00:20:20.187092 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 10 00:20:20.188546 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 00:20:20.190886 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:20:20.191042 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:20:20.192431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:20:20.194090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:20:20.195670 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:20:20.195786 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:20:20.198176 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 10 00:20:20.205367 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 10 00:20:20.220060 systemd[1]: Finished ensure-sysext.service. Sep 10 00:20:20.222856 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 10 00:20:20.223253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:20:20.233287 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:20:20.236300 systemd-resolved[1310]: Positive Trust Anchors: Sep 10 00:20:20.236313 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 00:20:20.236344 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 00:20:20.240275 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 00:20:20.242858 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:20:20.243275 systemd-resolved[1310]: Defaulting to hostname 'linux'. Sep 10 00:20:20.244816 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:20:20.245717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:20:20.247119 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 00:20:20.252213 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 10 00:20:20.254659 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:20:20.254958 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 00:20:20.256890 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:20:20.257065 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 10 00:20:20.258299 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:20:20.260206 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 00:20:20.261371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:20:20.261501 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:20:20.263114 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:20:20.263252 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:20:20.274097 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1362) Sep 10 00:20:20.275887 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 00:20:20.277592 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:20:20.277690 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 00:20:20.311466 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 00:20:20.312763 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 00:20:20.319710 systemd-networkd[1381]: lo: Link UP Sep 10 00:20:20.319716 systemd-networkd[1381]: lo: Gained carrier Sep 10 00:20:20.321140 systemd-networkd[1381]: Enumeration completed Sep 10 00:20:20.321233 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 00:20:20.321823 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 00:20:20.321834 systemd-networkd[1381]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 00:20:20.322086 systemd[1]: Reached target network.target - Network. Sep 10 00:20:20.322757 systemd-networkd[1381]: eth0: Link UP Sep 10 00:20:20.322765 systemd-networkd[1381]: eth0: Gained carrier Sep 10 00:20:20.322779 systemd-networkd[1381]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 00:20:20.329223 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 00:20:20.331506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 00:20:20.332135 systemd-networkd[1381]: eth0: DHCPv4 address 10.0.0.141/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:20:20.333130 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection. Sep 10 00:20:20.334278 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 00:20:20.335231 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 00:20:20.335272 systemd-timesyncd[1382]: Initial clock synchronization to Wed 2025-09-10 00:20:20.045182 UTC. Sep 10 00:20:20.350473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 00:20:20.353108 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 00:20:20.369388 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 10 00:20:20.380259 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
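eth0 matched /usr/lib/systemd/network/zz-default.network, the lowest-priority catch-all that runs DHCP on any interface nothing else claimed; that match is what produced the 10.0.0.141/16 lease and the timesyncd sync against 10.0.0.1 above. Its shape is roughly the following (a sketch; the shipped file's exact contents are not in this log):

    [Match]
    Name=*

    [Network]
    DHCP=yes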
Sep 10 00:20:20.389804 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:20:20.390533 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 00:20:20.421451 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 10 00:20:20.422611 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 00:20:20.423515 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 00:20:20.424385 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 00:20:20.425316 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 00:20:20.426371 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 00:20:20.427348 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 00:20:20.428347 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 00:20:20.429331 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 00:20:20.429362 systemd[1]: Reached target paths.target - Path Units. Sep 10 00:20:20.430061 systemd[1]: Reached target timers.target - Timer Units. Sep 10 00:20:20.431599 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 00:20:20.433774 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 00:20:20.440902 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 00:20:20.442895 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 10 00:20:20.444248 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 00:20:20.445151 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 00:20:20.445839 systemd[1]: Reached target basic.target - Basic System. Sep 10 00:20:20.446587 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 00:20:20.446619 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 00:20:20.447520 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 00:20:20.449215 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 00:20:20.452175 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:20:20.451191 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 00:20:20.455196 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 00:20:20.456487 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 00:20:20.458707 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 00:20:20.461748 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 00:20:20.463487 jq[1414]: false Sep 10 00:20:20.463885 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 00:20:20.465918 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 00:20:20.469361 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 10 00:20:20.471660 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 00:20:20.472739 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 00:20:20.473987 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 00:20:20.476216 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 00:20:20.477740 dbus-daemon[1413]: [system] SELinux support is enabled Sep 10 00:20:20.484729 extend-filesystems[1415]: Found loop3 Sep 10 00:20:20.484729 extend-filesystems[1415]: Found loop4 Sep 10 00:20:20.484729 extend-filesystems[1415]: Found loop5 Sep 10 00:20:20.484729 extend-filesystems[1415]: Found vda Sep 10 00:20:20.484729 extend-filesystems[1415]: Found vda1 Sep 10 00:20:20.484729 extend-filesystems[1415]: Found vda2 Sep 10 00:20:20.484729 extend-filesystems[1415]: Found vda3 Sep 10 00:20:20.484729 extend-filesystems[1415]: Found usr Sep 10 00:20:20.484729 extend-filesystems[1415]: Found vda4 Sep 10 00:20:20.484729 extend-filesystems[1415]: Found vda6 Sep 10 00:20:20.484729 extend-filesystems[1415]: Found vda7 Sep 10 00:20:20.484729 extend-filesystems[1415]: Found vda9 Sep 10 00:20:20.484729 extend-filesystems[1415]: Checking size of /dev/vda9 Sep 10 00:20:20.527616 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 00:20:20.478366 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 00:20:20.527715 extend-filesystems[1415]: Resized partition /dev/vda9 Sep 10 00:20:20.530032 update_engine[1425]: I20250910 00:20:20.509459 1425 main.cc:92] Flatcar Update Engine starting Sep 10 00:20:20.530032 update_engine[1425]: I20250910 00:20:20.519563 1425 update_check_scheduler.cc:74] Next update check in 4m51s Sep 10 00:20:20.483084 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 10 00:20:20.531841 extend-filesystems[1447]: resize2fs 1.47.1 (20-May-2024) Sep 10 00:20:20.485748 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 00:20:20.538832 jq[1428]: true Sep 10 00:20:20.485887 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 00:20:20.487954 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 00:20:20.539637 tar[1433]: linux-arm64/LICENSE Sep 10 00:20:20.539637 tar[1433]: linux-arm64/helm Sep 10 00:20:20.488892 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 00:20:20.498395 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 00:20:20.542262 jq[1435]: true Sep 10 00:20:20.498431 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 00:20:20.499550 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
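extend-filesystems.service is growing the root filesystem in place: the EXT4 messages show /dev/vda9 going from 553472 to 1864699 4k blocks while mounted at /, using resize2fs 1.47.1; the completion message follows below. The manual equivalent, assuming the partition itself already spans the disk as it does here, is a single online resize:

    resize2fs /dev/vda9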
Sep 10 00:20:20.578582 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 00:20:20.578629 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1366) Sep 10 00:20:20.499565 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 00:20:20.500019 (ntainerd)[1438]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 00:20:20.502431 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 00:20:20.502599 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 00:20:20.516022 systemd[1]: Started update-engine.service - Update Engine. Sep 10 00:20:20.526902 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 00:20:20.527146 systemd-logind[1422]: New seat seat0. Sep 10 00:20:20.530194 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 00:20:20.532273 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 00:20:20.584082 extend-filesystems[1447]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 00:20:20.584082 extend-filesystems[1447]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 00:20:20.584082 extend-filesystems[1447]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 00:20:20.582415 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 00:20:20.588291 extend-filesystems[1415]: Resized filesystem in /dev/vda9 Sep 10 00:20:20.582611 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 00:20:20.589741 bash[1467]: Updated "/home/core/.ssh/authorized_keys" Sep 10 00:20:20.590280 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 00:20:20.591857 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 10 00:20:20.606397 locksmithd[1448]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 00:20:20.663548 containerd[1438]: time="2025-09-10T00:20:20.663466800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 10 00:20:20.690127 containerd[1438]: time="2025-09-10T00:20:20.690075320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:20:20.691448 containerd[1438]: time="2025-09-10T00:20:20.691411520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:20:20.692026 containerd[1438]: time="2025-09-10T00:20:20.691696400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 10 00:20:20.692026 containerd[1438]: time="2025-09-10T00:20:20.691728440Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 10 00:20:20.692026 containerd[1438]: time="2025-09-10T00:20:20.691866600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 10 00:20:20.692026 containerd[1438]: time="2025-09-10T00:20:20.691882800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Sep 10 00:20:20.692026 containerd[1438]: time="2025-09-10T00:20:20.691933080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:20:20.692026 containerd[1438]: time="2025-09-10T00:20:20.691944480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:20:20.692504 containerd[1438]: time="2025-09-10T00:20:20.692473400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:20:20.692947 containerd[1438]: time="2025-09-10T00:20:20.692600240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 10 00:20:20.692947 containerd[1438]: time="2025-09-10T00:20:20.692628160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:20:20.692947 containerd[1438]: time="2025-09-10T00:20:20.692638640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 10 00:20:20.692947 containerd[1438]: time="2025-09-10T00:20:20.692731680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:20:20.692947 containerd[1438]: time="2025-09-10T00:20:20.692913720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:20:20.693406 containerd[1438]: time="2025-09-10T00:20:20.693382680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:20:20.693527 containerd[1438]: time="2025-09-10T00:20:20.693511120Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 10 00:20:20.693715 containerd[1438]: time="2025-09-10T00:20:20.693697040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 10 00:20:20.693927 containerd[1438]: time="2025-09-10T00:20:20.693861600Z" level=info msg="metadata content store policy set" policy=shared Sep 10 00:20:20.696960 containerd[1438]: time="2025-09-10T00:20:20.696855920Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 10 00:20:20.697198 containerd[1438]: time="2025-09-10T00:20:20.697177600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 10 00:20:20.697461 containerd[1438]: time="2025-09-10T00:20:20.697266600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 10 00:20:20.697461 containerd[1438]: time="2025-09-10T00:20:20.697286000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 10 00:20:20.697461 containerd[1438]: time="2025-09-10T00:20:20.697300440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 10 00:20:20.697461 containerd[1438]: time="2025-09-10T00:20:20.697419200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 10 00:20:20.698020 containerd[1438]: time="2025-09-10T00:20:20.697994000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698257200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698280280Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698292880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698307040Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698319840Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698331480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698345080Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698358800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698370280Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698382280Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698393240Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698416080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698428720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.698819 containerd[1438]: time="2025-09-10T00:20:20.698440240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698451960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698463240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698476960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698489280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698501400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698516480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698529880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698545000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698555840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698567680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698583160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698602120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698613800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.699127 containerd[1438]: time="2025-09-10T00:20:20.698624400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 10 00:20:20.700455 containerd[1438]: time="2025-09-10T00:20:20.700268360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 10 00:20:20.700455 containerd[1438]: time="2025-09-10T00:20:20.700354920Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 10 00:20:20.700455 containerd[1438]: time="2025-09-10T00:20:20.700368440Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 10 00:20:20.700455 containerd[1438]: time="2025-09-10T00:20:20.700426040Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 10 00:20:20.700455 containerd[1438]: time="2025-09-10T00:20:20.700440760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 10 00:20:20.700455 containerd[1438]: time="2025-09-10T00:20:20.700453520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 10 00:20:20.700455 containerd[1438]: time="2025-09-10T00:20:20.700463160Z" level=info msg="NRI interface is disabled by configuration." Sep 10 00:20:20.700630 containerd[1438]: time="2025-09-10T00:20:20.700473840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 10 00:20:20.701030 containerd[1438]: time="2025-09-10T00:20:20.700939240Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 10 00:20:20.701030 containerd[1438]: time="2025-09-10T00:20:20.701020000Z" level=info msg="Connect containerd service" Sep 10 00:20:20.701196 containerd[1438]: time="2025-09-10T00:20:20.701171440Z" level=info msg="using legacy CRI server" Sep 10 00:20:20.701196 containerd[1438]: time="2025-09-10T00:20:20.701194120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 00:20:20.701293 containerd[1438]: time="2025-09-10T00:20:20.701278960Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 10 00:20:20.702212 containerd[1438]: time="2025-09-10T00:20:20.702180160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:20:20.702562 
containerd[1438]: time="2025-09-10T00:20:20.702527600Z" level=info msg="Start subscribing containerd event" Sep 10 00:20:20.702591 containerd[1438]: time="2025-09-10T00:20:20.702578920Z" level=info msg="Start recovering state" Sep 10 00:20:20.702906 containerd[1438]: time="2025-09-10T00:20:20.702680480Z" level=info msg="Start event monitor" Sep 10 00:20:20.702906 containerd[1438]: time="2025-09-10T00:20:20.702692960Z" level=info msg="Start snapshots syncer" Sep 10 00:20:20.702906 containerd[1438]: time="2025-09-10T00:20:20.702701800Z" level=info msg="Start cni network conf syncer for default" Sep 10 00:20:20.702906 containerd[1438]: time="2025-09-10T00:20:20.702709480Z" level=info msg="Start streaming server" Sep 10 00:20:20.703549 containerd[1438]: time="2025-09-10T00:20:20.703528280Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 00:20:20.703602 containerd[1438]: time="2025-09-10T00:20:20.703588440Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 00:20:20.703714 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 00:20:20.705286 containerd[1438]: time="2025-09-10T00:20:20.705200560Z" level=info msg="containerd successfully booted in 0.042627s" Sep 10 00:20:20.909238 tar[1433]: linux-arm64/README.md Sep 10 00:20:20.919328 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 00:20:20.935201 sshd_keygen[1440]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 00:20:20.953388 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 00:20:20.962321 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 00:20:20.967416 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 00:20:20.967609 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 00:20:20.969996 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 00:20:20.980432 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 00:20:20.991313 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 00:20:20.993115 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 00:20:20.994095 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 00:20:21.672166 systemd-networkd[1381]: eth0: Gained IPv6LL Sep 10 00:20:21.674531 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 00:20:21.676146 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 00:20:21.686288 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 00:20:21.688364 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:20:21.690127 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 00:20:21.703974 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 00:20:21.704240 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 00:20:21.705632 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 00:20:21.708288 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 00:20:22.205747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:20:22.207011 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 10 00:20:22.208212 systemd[1]: Startup finished in 520ms (kernel) + 5.264s (initrd) + 3.294s (userspace) = 9.079s. Sep 10 00:20:22.209339 (kubelet)[1525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:20:22.526292 kubelet[1525]: E0910 00:20:22.526184 1525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:20:22.528735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:20:22.528887 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:20:25.864793 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 00:20:25.865894 systemd[1]: Started sshd@0-10.0.0.141:22-10.0.0.1:38056.service - OpenSSH per-connection server daemon (10.0.0.1:38056). Sep 10 00:20:25.924451 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 38056 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:20:25.926510 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:20:25.934964 systemd-logind[1422]: New session 1 of user core. Sep 10 00:20:25.935930 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 00:20:25.942322 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 00:20:25.950921 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 00:20:25.953311 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 00:20:25.958856 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:20:26.028993 systemd[1543]: Queued start job for default target default.target. Sep 10 00:20:26.039927 systemd[1543]: Created slice app.slice - User Application Slice. Sep 10 00:20:26.039956 systemd[1543]: Reached target paths.target - Paths. Sep 10 00:20:26.039968 systemd[1543]: Reached target timers.target - Timers. Sep 10 00:20:26.041139 systemd[1543]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 00:20:26.049878 systemd[1543]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 00:20:26.049949 systemd[1543]: Reached target sockets.target - Sockets. Sep 10 00:20:26.049961 systemd[1543]: Reached target basic.target - Basic System. Sep 10 00:20:26.049996 systemd[1543]: Reached target default.target - Main User Target. Sep 10 00:20:26.050019 systemd[1543]: Startup finished in 86ms. Sep 10 00:20:26.050641 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 00:20:26.051731 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 00:20:26.109011 systemd[1]: Started sshd@1-10.0.0.141:22-10.0.0.1:38064.service - OpenSSH per-connection server daemon (10.0.0.1:38064). Sep 10 00:20:26.144992 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 38064 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:20:26.146142 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:20:26.150344 systemd-logind[1422]: New session 2 of user core. Sep 10 00:20:26.159196 systemd[1]: Started session-2.scope - Session 2 of User core. 
Sep 10 00:20:26.208637 sshd[1554]: pam_unix(sshd:session): session closed for user core Sep 10 00:20:26.223133 systemd[1]: sshd@1-10.0.0.141:22-10.0.0.1:38064.service: Deactivated successfully. Sep 10 00:20:26.224320 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 00:20:26.226100 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit. Sep 10 00:20:26.227073 systemd[1]: Started sshd@2-10.0.0.141:22-10.0.0.1:38072.service - OpenSSH per-connection server daemon (10.0.0.1:38072). Sep 10 00:20:26.227744 systemd-logind[1422]: Removed session 2. Sep 10 00:20:26.262863 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 38072 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:20:26.263915 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:20:26.267119 systemd-logind[1422]: New session 3 of user core. Sep 10 00:20:26.277224 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 00:20:26.323184 sshd[1561]: pam_unix(sshd:session): session closed for user core Sep 10 00:20:26.333172 systemd[1]: sshd@2-10.0.0.141:22-10.0.0.1:38072.service: Deactivated successfully. Sep 10 00:20:26.334460 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:20:26.335541 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:20:26.336494 systemd[1]: Started sshd@3-10.0.0.141:22-10.0.0.1:38082.service - OpenSSH per-connection server daemon (10.0.0.1:38082). Sep 10 00:20:26.337159 systemd-logind[1422]: Removed session 3. Sep 10 00:20:26.372141 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 38082 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:20:26.373222 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:20:26.376622 systemd-logind[1422]: New session 4 of user core. Sep 10 00:20:26.390239 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 00:20:26.439593 sshd[1568]: pam_unix(sshd:session): session closed for user core Sep 10 00:20:26.453106 systemd[1]: sshd@3-10.0.0.141:22-10.0.0.1:38082.service: Deactivated successfully. Sep 10 00:20:26.454443 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 00:20:26.455590 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit. Sep 10 00:20:26.456578 systemd[1]: Started sshd@4-10.0.0.141:22-10.0.0.1:38098.service - OpenSSH per-connection server daemon (10.0.0.1:38098). Sep 10 00:20:26.457248 systemd-logind[1422]: Removed session 4. Sep 10 00:20:26.492163 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 38098 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:20:26.493229 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:20:26.496627 systemd-logind[1422]: New session 5 of user core. Sep 10 00:20:26.510181 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 00:20:26.565690 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 00:20:26.565948 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:20:26.583714 sudo[1578]: pam_unix(sudo:session): session closed for user root Sep 10 00:20:26.585221 sshd[1575]: pam_unix(sshd:session): session closed for user core Sep 10 00:20:26.603204 systemd[1]: sshd@4-10.0.0.141:22-10.0.0.1:38098.service: Deactivated successfully. 
Sep 10 00:20:26.604631 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 00:20:26.605770 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit. Sep 10 00:20:26.613416 systemd[1]: Started sshd@5-10.0.0.141:22-10.0.0.1:38110.service - OpenSSH per-connection server daemon (10.0.0.1:38110). Sep 10 00:20:26.615278 systemd-logind[1422]: Removed session 5. Sep 10 00:20:26.646912 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 38110 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:20:26.647997 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:20:26.651732 systemd-logind[1422]: New session 6 of user core. Sep 10 00:20:26.666205 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 00:20:26.715882 sudo[1587]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 00:20:26.716187 sudo[1587]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:20:26.719374 sudo[1587]: pam_unix(sudo:session): session closed for user root Sep 10 00:20:26.723602 sudo[1586]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 10 00:20:26.724101 sudo[1586]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:20:26.743278 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 10 00:20:26.744349 auditctl[1590]: No rules Sep 10 00:20:26.745147 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 00:20:26.745351 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 10 00:20:26.746819 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 10 00:20:26.768732 augenrules[1608]: No rules Sep 10 00:20:26.769830 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 10 00:20:26.771084 sudo[1586]: pam_unix(sudo:session): session closed for user root Sep 10 00:20:26.772432 sshd[1583]: pam_unix(sshd:session): session closed for user core Sep 10 00:20:26.787208 systemd[1]: sshd@5-10.0.0.141:22-10.0.0.1:38110.service: Deactivated successfully. Sep 10 00:20:26.788446 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 00:20:26.789569 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit. Sep 10 00:20:26.790504 systemd[1]: Started sshd@6-10.0.0.141:22-10.0.0.1:38124.service - OpenSSH per-connection server daemon (10.0.0.1:38124). Sep 10 00:20:26.791181 systemd-logind[1422]: Removed session 6. Sep 10 00:20:26.826519 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 38124 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:20:26.827572 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:20:26.831013 systemd-logind[1422]: New session 7 of user core. Sep 10 00:20:26.841169 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 10 00:20:26.889911 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 00:20:26.890220 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:20:27.148363 (dockerd)[1637]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 00:20:27.148370 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 10 00:20:27.354186 dockerd[1637]: time="2025-09-10T00:20:27.354128993Z" level=info msg="Starting up" Sep 10 00:20:27.519338 dockerd[1637]: time="2025-09-10T00:20:27.519239954Z" level=info msg="Loading containers: start." Sep 10 00:20:27.602074 kernel: Initializing XFRM netlink socket Sep 10 00:20:27.668486 systemd-networkd[1381]: docker0: Link UP Sep 10 00:20:27.693240 dockerd[1637]: time="2025-09-10T00:20:27.693193568Z" level=info msg="Loading containers: done." Sep 10 00:20:27.703843 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3837103570-merged.mount: Deactivated successfully. Sep 10 00:20:27.706471 dockerd[1637]: time="2025-09-10T00:20:27.706427879Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 00:20:27.706553 dockerd[1637]: time="2025-09-10T00:20:27.706518264Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 10 00:20:27.706626 dockerd[1637]: time="2025-09-10T00:20:27.706610184Z" level=info msg="Daemon has completed initialization" Sep 10 00:20:27.730947 dockerd[1637]: time="2025-09-10T00:20:27.730778012Z" level=info msg="API listen on /run/docker.sock" Sep 10 00:20:27.731074 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 00:20:28.222234 containerd[1438]: time="2025-09-10T00:20:28.222192687Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 10 00:20:28.954681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3765716944.mount: Deactivated successfully. 
Sep 10 00:20:30.513314 containerd[1438]: time="2025-09-10T00:20:30.513257978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:30.514793 containerd[1438]: time="2025-09-10T00:20:30.514760780Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Sep 10 00:20:30.515729 containerd[1438]: time="2025-09-10T00:20:30.515683414Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:30.519974 containerd[1438]: time="2025-09-10T00:20:30.518704686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:30.519974 containerd[1438]: time="2025-09-10T00:20:30.519749336Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.297515136s" Sep 10 00:20:30.519974 containerd[1438]: time="2025-09-10T00:20:30.519781145Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 10 00:20:30.520932 containerd[1438]: time="2025-09-10T00:20:30.520910898Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 10 00:20:31.834811 containerd[1438]: time="2025-09-10T00:20:31.834748544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:31.835280 containerd[1438]: time="2025-09-10T00:20:31.835253721Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Sep 10 00:20:31.836145 containerd[1438]: time="2025-09-10T00:20:31.836098270Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:31.838996 containerd[1438]: time="2025-09-10T00:20:31.838965347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:31.840219 containerd[1438]: time="2025-09-10T00:20:31.840195543Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.319252189s" Sep 10 00:20:31.840263 containerd[1438]: time="2025-09-10T00:20:31.840225772Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 10 00:20:31.840656 
containerd[1438]: time="2025-09-10T00:20:31.840614113Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 10 00:20:32.613283 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 00:20:32.622509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:20:32.727369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:20:32.731260 (kubelet)[1857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:20:32.824224 kubelet[1857]: E0910 00:20:32.824173 1857 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:20:32.828177 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:20:32.828317 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:20:33.179114 containerd[1438]: time="2025-09-10T00:20:33.179067940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:33.180101 containerd[1438]: time="2025-09-10T00:20:33.179849115Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Sep 10 00:20:33.180856 containerd[1438]: time="2025-09-10T00:20:33.180816521Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:33.184066 containerd[1438]: time="2025-09-10T00:20:33.183693006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:33.185139 containerd[1438]: time="2025-09-10T00:20:33.184919505Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.344265039s" Sep 10 00:20:33.185139 containerd[1438]: time="2025-09-10T00:20:33.184947857Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 10 00:20:33.185365 containerd[1438]: time="2025-09-10T00:20:33.185338980Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 10 00:20:34.239391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2418763309.mount: Deactivated successfully. 
Sep 10 00:20:34.463380 containerd[1438]: time="2025-09-10T00:20:34.463332963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:34.463898 containerd[1438]: time="2025-09-10T00:20:34.463862836Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Sep 10 00:20:34.464503 containerd[1438]: time="2025-09-10T00:20:34.464474861Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:34.467805 containerd[1438]: time="2025-09-10T00:20:34.466973517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:34.468778 containerd[1438]: time="2025-09-10T00:20:34.468727520Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.283353784s" Sep 10 00:20:34.468778 containerd[1438]: time="2025-09-10T00:20:34.468774657Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 10 00:20:34.469246 containerd[1438]: time="2025-09-10T00:20:34.469205249Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 00:20:35.011576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2854253461.mount: Deactivated successfully. 
Sep 10 00:20:35.745370 containerd[1438]: time="2025-09-10T00:20:35.745290846Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:35.746157 containerd[1438]: time="2025-09-10T00:20:35.746127861Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 10 00:20:35.746494 containerd[1438]: time="2025-09-10T00:20:35.746466319Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:35.751501 containerd[1438]: time="2025-09-10T00:20:35.750249374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:35.754134 containerd[1438]: time="2025-09-10T00:20:35.751466015Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.282237151s" Sep 10 00:20:35.754222 containerd[1438]: time="2025-09-10T00:20:35.754139189Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 10 00:20:35.754847 containerd[1438]: time="2025-09-10T00:20:35.754815507Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 00:20:36.188382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1837437372.mount: Deactivated successfully. 
Sep 10 00:20:36.193860 containerd[1438]: time="2025-09-10T00:20:36.193816690Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:36.195554 containerd[1438]: time="2025-09-10T00:20:36.195505708Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 10 00:20:36.197113 containerd[1438]: time="2025-09-10T00:20:36.196528919Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:36.198599 containerd[1438]: time="2025-09-10T00:20:36.198551618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:36.199562 containerd[1438]: time="2025-09-10T00:20:36.199400486Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 444.54802ms" Sep 10 00:20:36.199562 containerd[1438]: time="2025-09-10T00:20:36.199432648Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 10 00:20:36.200260 containerd[1438]: time="2025-09-10T00:20:36.200090454Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 10 00:20:36.711116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2433374169.mount: Deactivated successfully. Sep 10 00:20:38.248297 containerd[1438]: time="2025-09-10T00:20:38.248240494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:38.249547 containerd[1438]: time="2025-09-10T00:20:38.249489320Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Sep 10 00:20:38.251089 containerd[1438]: time="2025-09-10T00:20:38.250254205Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:38.253488 containerd[1438]: time="2025-09-10T00:20:38.253434212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:20:38.255910 containerd[1438]: time="2025-09-10T00:20:38.255880099Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.055760061s" Sep 10 00:20:38.255989 containerd[1438]: time="2025-09-10T00:20:38.255913334Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 10 00:20:42.863596 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 10 00:20:42.873438 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:20:42.985708 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 00:20:42.985800 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 00:20:42.986092 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:20:42.998287 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:20:43.017140 systemd[1]: Reloading requested from client PID 2020 ('systemctl') (unit session-7.scope)... Sep 10 00:20:43.017161 systemd[1]: Reloading... Sep 10 00:20:43.079167 zram_generator::config[2056]: No configuration found. Sep 10 00:20:43.217912 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:20:43.271911 systemd[1]: Reloading finished in 254 ms. Sep 10 00:20:43.311662 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 00:20:43.311727 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 00:20:43.311944 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:20:43.314257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:20:43.412385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:20:43.417016 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:20:43.454084 kubelet[2105]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:20:43.454084 kubelet[2105]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 00:20:43.454084 kubelet[2105]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 00:20:43.454084 kubelet[2105]: I0910 00:20:43.453948 2105 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:20:43.894307 kubelet[2105]: I0910 00:20:43.894257 2105 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 10 00:20:43.894307 kubelet[2105]: I0910 00:20:43.894286 2105 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:20:43.894578 kubelet[2105]: I0910 00:20:43.894562 2105 server.go:954] "Client rotation is on, will bootstrap in background" Sep 10 00:20:43.913770 kubelet[2105]: E0910 00:20:43.913709 2105 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.141:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:20:43.915680 kubelet[2105]: I0910 00:20:43.915527 2105 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:20:43.919909 kubelet[2105]: E0910 00:20:43.919854 2105 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:20:43.919909 kubelet[2105]: I0910 00:20:43.919901 2105 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:20:43.922389 kubelet[2105]: I0910 00:20:43.922358 2105 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:20:43.923139 kubelet[2105]: I0910 00:20:43.923083 2105 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:20:43.923321 kubelet[2105]: I0910 00:20:43.923132 2105 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 00:20:43.923419 kubelet[2105]: I0910 00:20:43.923372 2105 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:20:43.923419 kubelet[2105]: I0910 00:20:43.923381 2105 container_manager_linux.go:304] "Creating device plugin manager" Sep 10 00:20:43.923586 kubelet[2105]: I0910 00:20:43.923557 2105 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:20:43.926994 kubelet[2105]: I0910 00:20:43.926959 2105 kubelet.go:446] "Attempting to sync node with API server" Sep 10 00:20:43.927036 kubelet[2105]: I0910 00:20:43.926995 2105 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:20:43.927036 kubelet[2105]: I0910 00:20:43.927022 2105 kubelet.go:352] "Adding apiserver pod source" Sep 10 00:20:43.927036 kubelet[2105]: I0910 00:20:43.927032 2105 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:20:43.931946 kubelet[2105]: W0910 00:20:43.930907 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Sep 10 00:20:43.931946 kubelet[2105]: E0910 00:20:43.930992 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:20:43.931946 kubelet[2105]: W0910 00:20:43.931040 2105 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Sep 10 00:20:43.931946 kubelet[2105]: E0910 00:20:43.931086 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:20:43.931946 kubelet[2105]: I0910 00:20:43.931383 2105 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 10 00:20:43.932315 kubelet[2105]: I0910 00:20:43.932289 2105 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:20:43.932431 kubelet[2105]: W0910 00:20:43.932417 2105 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 00:20:43.933499 kubelet[2105]: I0910 00:20:43.933481 2105 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 00:20:43.933564 kubelet[2105]: I0910 00:20:43.933522 2105 server.go:1287] "Started kubelet" Sep 10 00:20:43.937803 kubelet[2105]: E0910 00:20:43.937562 2105 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.141:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.141:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c3df6a51dc21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:20:43.933498401 +0000 UTC m=+0.513102611,LastTimestamp:2025-09-10 00:20:43.933498401 +0000 UTC m=+0.513102611,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:20:43.937907 kubelet[2105]: I0910 00:20:43.937883 2105 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:20:43.938263 kubelet[2105]: E0910 00:20:43.938233 2105 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:20:43.938328 kubelet[2105]: I0910 00:20:43.938271 2105 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 00:20:43.938869 kubelet[2105]: I0910 00:20:43.938431 2105 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:20:43.938869 kubelet[2105]: I0910 00:20:43.938507 2105 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:20:43.938945 kubelet[2105]: W0910 00:20:43.938916 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Sep 10 00:20:43.939090 kubelet[2105]: E0910 00:20:43.939072 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:20:43.939808 kubelet[2105]: I0910 00:20:43.939278 2105 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:20:43.939808 kubelet[2105]: I0910 00:20:43.939490 2105 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:20:43.939808 kubelet[2105]: I0910 00:20:43.939544 2105 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:20:43.939808 kubelet[2105]: I0910 00:20:43.938456 2105 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 00:20:43.940183 kubelet[2105]: I0910 00:20:43.940156 2105 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:20:43.941180 kubelet[2105]: I0910 00:20:43.940450 2105 server.go:479] "Adding debug handlers to kubelet server" Sep 10 00:20:43.941180 kubelet[2105]: E0910 00:20:43.941114 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="200ms" Sep 10 00:20:43.941798 kubelet[2105]: I0910 00:20:43.941783 2105 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:20:43.941874 kubelet[2105]: I0910 00:20:43.941865 2105 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:20:43.943720 kubelet[2105]: E0910 00:20:43.943685 2105 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:20:43.952592 kubelet[2105]: I0910 00:20:43.952537 2105 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:20:43.954159 kubelet[2105]: I0910 00:20:43.954134 2105 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 10 00:20:43.954159 kubelet[2105]: I0910 00:20:43.954158 2105 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 10 00:20:43.954250 kubelet[2105]: I0910 00:20:43.954176 2105 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 10 00:20:43.954250 kubelet[2105]: I0910 00:20:43.954183 2105 kubelet.go:2382] "Starting kubelet main sync loop" Sep 10 00:20:43.954250 kubelet[2105]: E0910 00:20:43.954222 2105 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:20:43.956828 kubelet[2105]: W0910 00:20:43.956743 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused Sep 10 00:20:43.956828 kubelet[2105]: E0910 00:20:43.956777 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:20:43.957152 kubelet[2105]: I0910 00:20:43.957104 2105 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 00:20:43.957152 kubelet[2105]: I0910 00:20:43.957118 2105 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 00:20:43.957152 kubelet[2105]: I0910 00:20:43.957133 2105 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:20:44.031438 kubelet[2105]: I0910 00:20:44.031395 2105 policy_none.go:49] "None policy: Start" Sep 10 00:20:44.031438 kubelet[2105]: I0910 00:20:44.031431 2105 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 00:20:44.031438 kubelet[2105]: I0910 00:20:44.031445 2105 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:20:44.036531 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 00:20:44.038583 kubelet[2105]: E0910 00:20:44.038558 2105 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:20:44.050411 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 00:20:44.052979 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 00:20:44.055117 kubelet[2105]: E0910 00:20:44.055086 2105 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:20:44.062827 kubelet[2105]: I0910 00:20:44.062678 2105 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:20:44.062921 kubelet[2105]: I0910 00:20:44.062883 2105 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:20:44.062921 kubelet[2105]: I0910 00:20:44.062895 2105 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:20:44.063102 kubelet[2105]: I0910 00:20:44.063086 2105 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:20:44.064279 kubelet[2105]: E0910 00:20:44.064260 2105 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 00:20:44.064673 kubelet[2105]: E0910 00:20:44.064306 2105 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 00:20:44.142471 kubelet[2105]: E0910 00:20:44.142414 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="400ms" Sep 10 00:20:44.164523 kubelet[2105]: I0910 00:20:44.164441 2105 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:20:44.164929 kubelet[2105]: E0910 00:20:44.164887 2105 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost" Sep 10 00:20:44.262592 systemd[1]: Created slice kubepods-burstable-podb975c915421df1193ed39dacf111e57a.slice - libcontainer container kubepods-burstable-podb975c915421df1193ed39dacf111e57a.slice. Sep 10 00:20:44.292225 kubelet[2105]: E0910 00:20:44.292186 2105 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:20:44.294709 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 10 00:20:44.306074 kubelet[2105]: E0910 00:20:44.305941 2105 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:20:44.308091 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. 
Sep 10 00:20:44.311633 kubelet[2105]: E0910 00:20:44.311391 2105 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:20:44.366732 kubelet[2105]: I0910 00:20:44.366712 2105 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 00:20:44.367151 kubelet[2105]: E0910 00:20:44.367117 2105 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost"
Sep 10 00:20:44.440442 kubelet[2105]: I0910 00:20:44.440350 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:20:44.440442 kubelet[2105]: I0910 00:20:44.440385 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:20:44.440530 kubelet[2105]: I0910 00:20:44.440441 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b975c915421df1193ed39dacf111e57a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b975c915421df1193ed39dacf111e57a\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:20:44.440530 kubelet[2105]: I0910 00:20:44.440484 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b975c915421df1193ed39dacf111e57a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b975c915421df1193ed39dacf111e57a\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:20:44.440530 kubelet[2105]: I0910 00:20:44.440502 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:20:44.440530 kubelet[2105]: I0910 00:20:44.440521 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:20:44.440610 kubelet[2105]: I0910 00:20:44.440537 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:20:44.440610 kubelet[2105]: I0910 00:20:44.440552 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 00:20:44.440610 kubelet[2105]: I0910 00:20:44.440588 2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b975c915421df1193ed39dacf111e57a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b975c915421df1193ed39dacf111e57a\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:20:44.543193 kubelet[2105]: E0910 00:20:44.543151 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="800ms"
Sep 10 00:20:44.593381 kubelet[2105]: E0910 00:20:44.593344 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:44.594006 containerd[1438]: time="2025-09-10T00:20:44.593955940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b975c915421df1193ed39dacf111e57a,Namespace:kube-system,Attempt:0,}"
Sep 10 00:20:44.606567 kubelet[2105]: E0910 00:20:44.606534 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:44.607088 containerd[1438]: time="2025-09-10T00:20:44.607018790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}"
Sep 10 00:20:44.612288 kubelet[2105]: E0910 00:20:44.612252 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:44.612589 containerd[1438]: time="2025-09-10T00:20:44.612554392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}"
Sep 10 00:20:44.768987 kubelet[2105]: I0910 00:20:44.768908 2105 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 00:20:44.769273 kubelet[2105]: E0910 00:20:44.769235 2105 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.141:6443/api/v1/nodes\": dial tcp 10.0.0.141:6443: connect: connection refused" node="localhost"
Sep 10 00:20:44.852750 kubelet[2105]: E0910 00:20:44.852644 2105 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.141:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.141:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c3df6a51dc21 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:20:43.933498401 +0000 UTC m=+0.513102611,LastTimestamp:2025-09-10 00:20:43.933498401 +0000 UTC m=+0.513102611,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 10 00:20:44.987804 kubelet[2105]: W0910 00:20:44.987705 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Sep 10 00:20:44.987804 kubelet[2105]: E0910 00:20:44.987766 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.141:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:20:45.076178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount780942927.mount: Deactivated successfully.
Sep 10 00:20:45.082367 containerd[1438]: time="2025-09-10T00:20:45.082325293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:20:45.083749 containerd[1438]: time="2025-09-10T00:20:45.083716444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Sep 10 00:20:45.085473 containerd[1438]: time="2025-09-10T00:20:45.085436071Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:20:45.087211 containerd[1438]: time="2025-09-10T00:20:45.087078812Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:20:45.087541 containerd[1438]: time="2025-09-10T00:20:45.087353767Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 10 00:20:45.087859 containerd[1438]: time="2025-09-10T00:20:45.087831224Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:20:45.088912 containerd[1438]: time="2025-09-10T00:20:45.088194289Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 10 00:20:45.090111 containerd[1438]: time="2025-09-10T00:20:45.090075279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 00:20:45.092723 containerd[1438]: time="2025-09-10T00:20:45.092649968Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 498.606972ms"
Sep 10 00:20:45.094243 containerd[1438]: time="2025-09-10T00:20:45.094111336Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 487.01711ms"
Sep 10 00:20:45.096570 containerd[1438]: time="2025-09-10T00:20:45.096542156Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.926306ms"
Sep 10 00:20:45.159996 kubelet[2105]: W0910 00:20:45.159888 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Sep 10 00:20:45.159996 kubelet[2105]: E0910 00:20:45.159965 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.141:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:20:45.177824 kubelet[2105]: W0910 00:20:45.177752 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Sep 10 00:20:45.177824 kubelet[2105]: E0910 00:20:45.177822 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.141:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:20:45.193398 containerd[1438]: time="2025-09-10T00:20:45.192991598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:20:45.194175 containerd[1438]: time="2025-09-10T00:20:45.194110869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:20:45.194175 containerd[1438]: time="2025-09-10T00:20:45.194153407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:20:45.194175 containerd[1438]: time="2025-09-10T00:20:45.194165669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:20:45.194892 containerd[1438]: time="2025-09-10T00:20:45.194808642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:20:45.195290 containerd[1438]: time="2025-09-10T00:20:45.195160524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:20:45.195290 containerd[1438]: time="2025-09-10T00:20:45.195180255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:20:45.195290 containerd[1438]: time="2025-09-10T00:20:45.195270362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:20:45.199451 containerd[1438]: time="2025-09-10T00:20:45.199301745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:20:45.199451 containerd[1438]: time="2025-09-10T00:20:45.199351991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:20:45.199451 containerd[1438]: time="2025-09-10T00:20:45.199378232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:20:45.199574 containerd[1438]: time="2025-09-10T00:20:45.199513553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:20:45.219235 systemd[1]: Started cri-containerd-a59c52d3bad6c13fc24cd5a504ed8d184941eceb446559d1301b4a0e16d406d1.scope - libcontainer container a59c52d3bad6c13fc24cd5a504ed8d184941eceb446559d1301b4a0e16d406d1.
Sep 10 00:20:45.220359 systemd[1]: Started cri-containerd-c4cc512bb2dbc54c9cc48eeb048af530e615d5b20550c8c11e1f9e4454805da9.scope - libcontainer container c4cc512bb2dbc54c9cc48eeb048af530e615d5b20550c8c11e1f9e4454805da9.
Sep 10 00:20:45.222806 systemd[1]: Started cri-containerd-47126af0dbd8e4577a4544ca3201831b6311c7425ad30cc892f6da8f2eec4cb2.scope - libcontainer container 47126af0dbd8e4577a4544ca3201831b6311c7425ad30cc892f6da8f2eec4cb2.
Sep 10 00:20:45.252368 containerd[1438]: time="2025-09-10T00:20:45.252312038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a59c52d3bad6c13fc24cd5a504ed8d184941eceb446559d1301b4a0e16d406d1\""
Sep 10 00:20:45.253601 kubelet[2105]: E0910 00:20:45.253509 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:45.257079 containerd[1438]: time="2025-09-10T00:20:45.257027654Z" level=info msg="CreateContainer within sandbox \"a59c52d3bad6c13fc24cd5a504ed8d184941eceb446559d1301b4a0e16d406d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 10 00:20:45.258827 containerd[1438]: time="2025-09-10T00:20:45.258795610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"47126af0dbd8e4577a4544ca3201831b6311c7425ad30cc892f6da8f2eec4cb2\""
Sep 10 00:20:45.260014 kubelet[2105]: E0910 00:20:45.259860 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:45.260472 containerd[1438]: time="2025-09-10T00:20:45.260386867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b975c915421df1193ed39dacf111e57a,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4cc512bb2dbc54c9cc48eeb048af530e615d5b20550c8c11e1f9e4454805da9\""
Sep 10 00:20:45.261150 kubelet[2105]: E0910 00:20:45.261122 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:45.261628 containerd[1438]: time="2025-09-10T00:20:45.261523433Z" level=info msg="CreateContainer within sandbox \"47126af0dbd8e4577a4544ca3201831b6311c7425ad30cc892f6da8f2eec4cb2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 10 00:20:45.263106 containerd[1438]: time="2025-09-10T00:20:45.262770437Z" level=info msg="CreateContainer within sandbox \"c4cc512bb2dbc54c9cc48eeb048af530e615d5b20550c8c11e1f9e4454805da9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 10 00:20:45.273581 containerd[1438]: time="2025-09-10T00:20:45.273545329Z" level=info msg="CreateContainer within sandbox \"a59c52d3bad6c13fc24cd5a504ed8d184941eceb446559d1301b4a0e16d406d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"014c1fd65ec9ea2eed6c60b86babd5c124d533ad82b5adf67f17c0108ea31b71\""
Sep 10 00:20:45.274163 containerd[1438]: time="2025-09-10T00:20:45.274135540Z" level=info msg="StartContainer for \"014c1fd65ec9ea2eed6c60b86babd5c124d533ad82b5adf67f17c0108ea31b71\""
Sep 10 00:20:45.279194 containerd[1438]: time="2025-09-10T00:20:45.279163895Z" level=info msg="CreateContainer within sandbox \"47126af0dbd8e4577a4544ca3201831b6311c7425ad30cc892f6da8f2eec4cb2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7527bcedf37890e11ed42f7ee060596e03fec745279009878c98400cc8a8d92b\""
Sep 10 00:20:45.279628 containerd[1438]: time="2025-09-10T00:20:45.279575448Z" level=info msg="StartContainer for \"7527bcedf37890e11ed42f7ee060596e03fec745279009878c98400cc8a8d92b\""
Sep 10 00:20:45.280628 containerd[1438]: time="2025-09-10T00:20:45.280576694Z" level=info msg="CreateContainer within sandbox \"c4cc512bb2dbc54c9cc48eeb048af530e615d5b20550c8c11e1f9e4454805da9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"341317c837c1e9c72bdfeb1cd3bfc9ac55155a6e0fe4b1695cf8573dee6e2017\""
Sep 10 00:20:45.281011 containerd[1438]: time="2025-09-10T00:20:45.280916154Z" level=info msg="StartContainer for \"341317c837c1e9c72bdfeb1cd3bfc9ac55155a6e0fe4b1695cf8573dee6e2017\""
Sep 10 00:20:45.301183 systemd[1]: Started cri-containerd-014c1fd65ec9ea2eed6c60b86babd5c124d533ad82b5adf67f17c0108ea31b71.scope - libcontainer container 014c1fd65ec9ea2eed6c60b86babd5c124d533ad82b5adf67f17c0108ea31b71.
Sep 10 00:20:45.304369 systemd[1]: Started cri-containerd-341317c837c1e9c72bdfeb1cd3bfc9ac55155a6e0fe4b1695cf8573dee6e2017.scope - libcontainer container 341317c837c1e9c72bdfeb1cd3bfc9ac55155a6e0fe4b1695cf8573dee6e2017.
Sep 10 00:20:45.305527 systemd[1]: Started cri-containerd-7527bcedf37890e11ed42f7ee060596e03fec745279009878c98400cc8a8d92b.scope - libcontainer container 7527bcedf37890e11ed42f7ee060596e03fec745279009878c98400cc8a8d92b.
Sep 10 00:20:45.336945 containerd[1438]: time="2025-09-10T00:20:45.336774773Z" level=info msg="StartContainer for \"014c1fd65ec9ea2eed6c60b86babd5c124d533ad82b5adf67f17c0108ea31b71\" returns successfully"
Sep 10 00:20:45.341479 containerd[1438]: time="2025-09-10T00:20:45.341449848Z" level=info msg="StartContainer for \"7527bcedf37890e11ed42f7ee060596e03fec745279009878c98400cc8a8d92b\" returns successfully"
Sep 10 00:20:45.344628 kubelet[2105]: E0910 00:20:45.344464 2105 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.141:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.141:6443: connect: connection refused" interval="1.6s"
Sep 10 00:20:45.349984 containerd[1438]: time="2025-09-10T00:20:45.349948652Z" level=info msg="StartContainer for \"341317c837c1e9c72bdfeb1cd3bfc9ac55155a6e0fe4b1695cf8573dee6e2017\" returns successfully"
Sep 10 00:20:45.368619 kubelet[2105]: W0910 00:20:45.368555 2105 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.141:6443: connect: connection refused
Sep 10 00:20:45.368776 kubelet[2105]: E0910 00:20:45.368632 2105 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.141:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.141:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:20:45.570812 kubelet[2105]: I0910 00:20:45.570786 2105 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 00:20:45.963677 kubelet[2105]: E0910 00:20:45.963649 2105 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:20:45.964329 kubelet[2105]: E0910 00:20:45.963786 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:45.964624 kubelet[2105]: E0910 00:20:45.964605 2105 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:20:45.964793 kubelet[2105]: E0910 00:20:45.964778 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:45.965831 kubelet[2105]: E0910 00:20:45.965807 2105 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:20:45.965986 kubelet[2105]: E0910 00:20:45.965973 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:46.967565 kubelet[2105]: E0910 00:20:46.967315 2105 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:20:46.967565 kubelet[2105]: E0910 00:20:46.967418 2105 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 10 00:20:46.967565 kubelet[2105]: E0910 00:20:46.967443 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:46.967565 kubelet[2105]: E0910 00:20:46.967511 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:47.073424 kubelet[2105]: E0910 00:20:47.073380 2105 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 10 00:20:47.146562 kubelet[2105]: I0910 00:20:47.146523 2105 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 10 00:20:47.146562 kubelet[2105]: E0910 00:20:47.146560 2105 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 10 00:20:47.155029 kubelet[2105]: E0910 00:20:47.155002 2105 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:20:47.255858 kubelet[2105]: E0910 00:20:47.255721 2105 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:20:47.356458 kubelet[2105]: E0910 00:20:47.356384 2105 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:20:47.456664 kubelet[2105]: E0910 00:20:47.456621 2105 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:20:47.557330 kubelet[2105]: E0910 00:20:47.557198 2105 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:20:47.657770 kubelet[2105]: E0910 00:20:47.657726 2105 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:20:47.758282 kubelet[2105]: E0910 00:20:47.758234 2105 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:20:47.858991 kubelet[2105]: E0910 00:20:47.858802 2105 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:20:47.959440 kubelet[2105]: E0910 00:20:47.959403 2105 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:20:48.041730 kubelet[2105]: I0910 00:20:48.041693 2105 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:20:48.052025 kubelet[2105]: I0910 00:20:48.051980 2105 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:20:48.055620 kubelet[2105]: I0910 00:20:48.055583 2105 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 00:20:48.852326 systemd[1]: Reloading requested from client PID 2385 ('systemctl') (unit session-7.scope)...
Sep 10 00:20:48.852359 systemd[1]: Reloading...
Sep 10 00:20:48.913083 zram_generator::config[2424]: No configuration found.
Sep 10 00:20:48.930517 kubelet[2105]: I0910 00:20:48.930435 2105 apiserver.go:52] "Watching apiserver"
Sep 10 00:20:48.934070 kubelet[2105]: E0910 00:20:48.932725 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:48.934070 kubelet[2105]: E0910 00:20:48.932875 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:48.934070 kubelet[2105]: E0910 00:20:48.933497 2105 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:48.940122 kubelet[2105]: I0910 00:20:48.940077 2105 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 10 00:20:49.068450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 00:20:49.133724 systemd[1]: Reloading finished in 281 ms.
Sep 10 00:20:49.164504 kubelet[2105]: I0910 00:20:49.164436 2105 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 00:20:49.164520 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:20:49.176377 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 00:20:49.176603 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:20:49.193372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:20:49.294749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:20:49.298517 (kubelet)[2466]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 00:20:49.333266 kubelet[2466]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:20:49.333266 kubelet[2466]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 10 00:20:49.333266 kubelet[2466]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 00:20:49.333742 kubelet[2466]: I0910 00:20:49.333323 2466 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 00:20:49.339840 kubelet[2466]: I0910 00:20:49.339802 2466 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 10 00:20:49.339840 kubelet[2466]: I0910 00:20:49.339833 2466 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 00:20:49.340092 kubelet[2466]: I0910 00:20:49.340075 2466 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 10 00:20:49.341240 kubelet[2466]: I0910 00:20:49.341219 2466 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 10 00:20:49.343524 kubelet[2466]: I0910 00:20:49.343503 2466 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 00:20:49.345996 kubelet[2466]: E0910 00:20:49.345957 2466 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 10 00:20:49.345996 kubelet[2466]: I0910 00:20:49.345987 2466 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 10 00:20:49.350393 kubelet[2466]: I0910 00:20:49.350359 2466 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 00:20:49.350773 kubelet[2466]: I0910 00:20:49.350748 2466 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 00:20:49.350961 kubelet[2466]: I0910 00:20:49.350775 2466 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 00:20:49.351032 kubelet[2466]: I0910 00:20:49.350971 2466 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 00:20:49.351032 kubelet[2466]: I0910 00:20:49.350980 2466 container_manager_linux.go:304] "Creating device plugin manager"
Sep 10 00:20:49.351032 kubelet[2466]: I0910 00:20:49.351019 2466 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:20:49.351170 kubelet[2466]: I0910 00:20:49.351159 2466 kubelet.go:446] "Attempting to sync node with API server"
Sep 10 00:20:49.351194 kubelet[2466]: I0910 00:20:49.351172 2466 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 00:20:49.351214 kubelet[2466]: I0910 00:20:49.351204 2466 kubelet.go:352] "Adding apiserver pod source"
Sep 10 00:20:49.351235 kubelet[2466]: I0910 00:20:49.351214 2466 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 00:20:49.351959 kubelet[2466]: I0910 00:20:49.351938 2466 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 10 00:20:49.352446 kubelet[2466]: I0910 00:20:49.352425 2466 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 00:20:49.352845 kubelet[2466]: I0910 00:20:49.352818 2466 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 10 00:20:49.352888 kubelet[2466]: I0910 00:20:49.352854 2466 server.go:1287] "Started kubelet"
Sep 10 00:20:49.355278 kubelet[2466]: I0910 00:20:49.353034 2466 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 00:20:49.355278 kubelet[2466]: I0910 00:20:49.353161 2466 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 00:20:49.355278 kubelet[2466]: I0910 00:20:49.353370 2466 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 00:20:49.355278 kubelet[2466]: I0910 00:20:49.353985 2466 server.go:479] "Adding debug handlers to kubelet server"
Sep 10 00:20:49.355278 kubelet[2466]: I0910 00:20:49.354886 2466 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 00:20:49.355278 kubelet[2466]: I0910 00:20:49.355022 2466 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 00:20:49.355278 kubelet[2466]: E0910 00:20:49.355226 2466 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 00:20:49.355278 kubelet[2466]: I0910 00:20:49.355257 2466 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 10 00:20:49.355479 kubelet[2466]: I0910 00:20:49.355409 2466 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 10 00:20:49.356034 kubelet[2466]: I0910 00:20:49.355524 2466 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 00:20:49.358860 kubelet[2466]: I0910 00:20:49.357733 2466 factory.go:221] Registration of the containerd container factory successfully
Sep 10 00:20:49.358860 kubelet[2466]: I0910 00:20:49.357752 2466 factory.go:221] Registration of the systemd container factory successfully
Sep 10 00:20:49.358860 kubelet[2466]: I0910 00:20:49.357822 2466 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 00:20:49.358860 kubelet[2466]: E0910 00:20:49.358641 2466 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 00:20:49.385718 kubelet[2466]: I0910 00:20:49.385611 2466 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 00:20:49.390382 kubelet[2466]: I0910 00:20:49.390362 2466 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 10 00:20:49.390382 kubelet[2466]: I0910 00:20:49.390388 2466 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 10 00:20:49.390476 kubelet[2466]: I0910 00:20:49.390408 2466 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 10 00:20:49.390476 kubelet[2466]: I0910 00:20:49.390416 2466 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 10 00:20:49.390703 kubelet[2466]: E0910 00:20:49.390480 2466 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 00:20:49.406457 kubelet[2466]: I0910 00:20:49.406433 2466 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 10 00:20:49.406457 kubelet[2466]: I0910 00:20:49.406454 2466 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 10 00:20:49.406548 kubelet[2466]: I0910 00:20:49.406473 2466 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:20:49.406649 kubelet[2466]: I0910 00:20:49.406634 2466 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 10 00:20:49.406680 kubelet[2466]: I0910 00:20:49.406651 2466 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 10 00:20:49.406680 kubelet[2466]: I0910 00:20:49.406669 2466 policy_none.go:49] "None policy: Start"
Sep 10 00:20:49.406680 kubelet[2466]: I0910 00:20:49.406677 2466 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 10 00:20:49.406743 kubelet[2466]: I0910 00:20:49.406700 2466 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 00:20:49.406804 kubelet[2466]: I0910 00:20:49.406792 2466 state_mem.go:75] "Updated machine memory state"
Sep 10 00:20:49.409725 kubelet[2466]: I0910 00:20:49.409707 2466 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 10 00:20:49.409862 kubelet[2466]: I0910 00:20:49.409846 2466 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 00:20:49.409902 kubelet[2466]: I0910 00:20:49.409864 2466 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 00:20:49.410373 kubelet[2466]: I0910 00:20:49.410356 2466 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 00:20:49.412345 kubelet[2466]: E0910 00:20:49.412150 2466 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 10 00:20:49.491943 kubelet[2466]: I0910 00:20:49.491708 2466 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:20:49.491943 kubelet[2466]: I0910 00:20:49.491831 2466 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:20:49.491943 kubelet[2466]: I0910 00:20:49.491891 2466 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 00:20:49.497200 kubelet[2466]: E0910 00:20:49.497173 2466 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 10 00:20:49.497282 kubelet[2466]: E0910 00:20:49.497248 2466 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:20:49.497355 kubelet[2466]: E0910 00:20:49.497339 2466 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:20:49.513673 kubelet[2466]: I0910 00:20:49.513650 2466 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 00:20:49.520009 kubelet[2466]: I0910 00:20:49.519901 2466 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 10 00:20:49.520009 kubelet[2466]: I0910 00:20:49.519972 2466 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 10 00:20:49.657492 kubelet[2466]: I0910 00:20:49.657300 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b975c915421df1193ed39dacf111e57a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b975c915421df1193ed39dacf111e57a\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:20:49.657492 kubelet[2466]: I0910 00:20:49.657335 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:20:49.657492 kubelet[2466]: I0910 00:20:49.657364 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:20:49.657492 kubelet[2466]: I0910 00:20:49.657379 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 00:20:49.657492 kubelet[2466]: I0910 00:20:49.657400 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b975c915421df1193ed39dacf111e57a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b975c915421df1193ed39dacf111e57a\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:20:49.657839 kubelet[2466]: I0910 00:20:49.657416 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b975c915421df1193ed39dacf111e57a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b975c915421df1193ed39dacf111e57a\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 00:20:49.657839 kubelet[2466]: I0910 00:20:49.657766 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:20:49.657839 kubelet[2466]: I0910 00:20:49.657803 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:20:49.657839 kubelet[2466]: I0910 00:20:49.657820 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 00:20:49.798292 kubelet[2466]: E0910 00:20:49.798260 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:49.798373 kubelet[2466]: E0910 00:20:49.798338 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:49.798476 kubelet[2466]: E0910 00:20:49.798450 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:50.351851 kubelet[2466]: I0910 00:20:50.351817 2466 apiserver.go:52] "Watching apiserver"
Sep 10 00:20:50.356379 kubelet[2466]: I0910 00:20:50.356334 2466 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 10 00:20:50.401307 kubelet[2466]: I0910 00:20:50.400033 2466 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:20:50.401307 kubelet[2466]: E0910 00:20:50.400478 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:50.401766 kubelet[2466]: E0910 00:20:50.400042 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:50.428102 kubelet[2466]: E0910 00:20:50.427944 2466 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 10 00:20:50.428188 kubelet[2466]: E0910 00:20:50.428101 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:50.436570 kubelet[2466]: I0910 00:20:50.436509 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.436495288 podStartE2EDuration="2.436495288s" podCreationTimestamp="2025-09-10 00:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:20:50.428369422 +0000 UTC m=+1.126489507" watchObservedRunningTime="2025-09-10 00:20:50.436495288 +0000 UTC m=+1.134615334"
Sep 10 00:20:50.445133 kubelet[2466]: I0910 00:20:50.445087 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.445073453 podStartE2EDuration="2.445073453s" podCreationTimestamp="2025-09-10 00:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:20:50.436786628 +0000 UTC m=+1.134906674" watchObservedRunningTime="2025-09-10 00:20:50.445073453 +0000 UTC m=+1.143193499"
Sep 10 00:20:50.445263 kubelet[2466]: I0910 00:20:50.445235 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.445229575 podStartE2EDuration="2.445229575s" podCreationTimestamp="2025-09-10 00:20:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:20:50.44485318 +0000 UTC m=+1.142973226" watchObservedRunningTime="2025-09-10 00:20:50.445229575 +0000 UTC m=+1.143349621"
Sep 10 00:20:51.401523 kubelet[2466]: E0910 00:20:51.401478 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:51.401861 kubelet[2466]: E0910 00:20:51.401534 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:54.264254 kubelet[2466]: E0910 00:20:54.264177 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:55.144218 kubelet[2466]: I0910 00:20:55.144141 2466 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 10 00:20:55.144496 containerd[1438]: time="2025-09-10T00:20:55.144464589Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 10 00:20:55.144958 kubelet[2466]: I0910 00:20:55.144934 2466 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 10 00:20:56.039630 systemd[1]: Created slice kubepods-besteffort-pod93abd383_1585_454e_9db0_dc93da3789c1.slice - libcontainer container kubepods-besteffort-pod93abd383_1585_454e_9db0_dc93da3789c1.slice.
Sep 10 00:20:56.104492 kubelet[2466]: I0910 00:20:56.104437 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/93abd383-1585-454e-9db0-dc93da3789c1-kube-proxy\") pod \"kube-proxy-kbhqk\" (UID: \"93abd383-1585-454e-9db0-dc93da3789c1\") " pod="kube-system/kube-proxy-kbhqk"
Sep 10 00:20:56.104492 kubelet[2466]: I0910 00:20:56.104490 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93abd383-1585-454e-9db0-dc93da3789c1-lib-modules\") pod \"kube-proxy-kbhqk\" (UID: \"93abd383-1585-454e-9db0-dc93da3789c1\") " pod="kube-system/kube-proxy-kbhqk"
Sep 10 00:20:56.104799 kubelet[2466]: I0910 00:20:56.104515 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fkv4\" (UniqueName: \"kubernetes.io/projected/93abd383-1585-454e-9db0-dc93da3789c1-kube-api-access-4fkv4\") pod \"kube-proxy-kbhqk\" (UID: \"93abd383-1585-454e-9db0-dc93da3789c1\") " pod="kube-system/kube-proxy-kbhqk"
Sep 10 00:20:56.104799 kubelet[2466]: I0910 00:20:56.104534 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93abd383-1585-454e-9db0-dc93da3789c1-xtables-lock\") pod \"kube-proxy-kbhqk\" (UID: \"93abd383-1585-454e-9db0-dc93da3789c1\") " pod="kube-system/kube-proxy-kbhqk"
Sep 10 00:20:56.251901 systemd[1]: Created slice kubepods-besteffort-podbb383d17_c151_43ec_94eb_e9c05160dae0.slice - libcontainer container kubepods-besteffort-podbb383d17_c151_43ec_94eb_e9c05160dae0.slice.
Sep 10 00:20:56.305736 kubelet[2466]: I0910 00:20:56.305637 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bb383d17-c151-43ec-94eb-e9c05160dae0-var-lib-calico\") pod \"tigera-operator-755d956888-xr49x\" (UID: \"bb383d17-c151-43ec-94eb-e9c05160dae0\") " pod="tigera-operator/tigera-operator-755d956888-xr49x"
Sep 10 00:20:56.305736 kubelet[2466]: I0910 00:20:56.305678 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjmwd\" (UniqueName: \"kubernetes.io/projected/bb383d17-c151-43ec-94eb-e9c05160dae0-kube-api-access-hjmwd\") pod \"tigera-operator-755d956888-xr49x\" (UID: \"bb383d17-c151-43ec-94eb-e9c05160dae0\") " pod="tigera-operator/tigera-operator-755d956888-xr49x"
Sep 10 00:20:56.349062 kubelet[2466]: E0910 00:20:56.349012 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:56.350012 containerd[1438]: time="2025-09-10T00:20:56.349518501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbhqk,Uid:93abd383-1585-454e-9db0-dc93da3789c1,Namespace:kube-system,Attempt:0,}"
Sep 10 00:20:56.368193 containerd[1438]: time="2025-09-10T00:20:56.367946626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:20:56.368193 containerd[1438]: time="2025-09-10T00:20:56.368000994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:20:56.368193 containerd[1438]: time="2025-09-10T00:20:56.368016357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:20:56.368193 containerd[1438]: time="2025-09-10T00:20:56.368116172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:20:56.379247 systemd[1]: run-containerd-runc-k8s.io-b7d082ad8fb57c2a27d5d673cb21d6f16b62e85012186af92a5b3c9533cdfd2c-runc.fIkknf.mount: Deactivated successfully.
Sep 10 00:20:56.391197 systemd[1]: Started cri-containerd-b7d082ad8fb57c2a27d5d673cb21d6f16b62e85012186af92a5b3c9533cdfd2c.scope - libcontainer container b7d082ad8fb57c2a27d5d673cb21d6f16b62e85012186af92a5b3c9533cdfd2c.
Sep 10 00:20:56.408110 containerd[1438]: time="2025-09-10T00:20:56.408073500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kbhqk,Uid:93abd383-1585-454e-9db0-dc93da3789c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7d082ad8fb57c2a27d5d673cb21d6f16b62e85012186af92a5b3c9533cdfd2c\""
Sep 10 00:20:56.409703 kubelet[2466]: E0910 00:20:56.409668 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:56.411924 containerd[1438]: time="2025-09-10T00:20:56.411870326Z" level=info msg="CreateContainer within sandbox \"b7d082ad8fb57c2a27d5d673cb21d6f16b62e85012186af92a5b3c9533cdfd2c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 10 00:20:56.423857 containerd[1438]: time="2025-09-10T00:20:56.423819211Z" level=info msg="CreateContainer within sandbox \"b7d082ad8fb57c2a27d5d673cb21d6f16b62e85012186af92a5b3c9533cdfd2c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"18523b6d844f3ca5cb465c9754a3638d7ca4af4c0a9d663e59ae98220b7fa0f4\""
Sep 10 00:20:56.424289 containerd[1438]: time="2025-09-10T00:20:56.424267680Z" level=info msg="StartContainer for \"18523b6d844f3ca5cb465c9754a3638d7ca4af4c0a9d663e59ae98220b7fa0f4\""
Sep 10 00:20:56.455221 systemd[1]: Started cri-containerd-18523b6d844f3ca5cb465c9754a3638d7ca4af4c0a9d663e59ae98220b7fa0f4.scope - libcontainer container 18523b6d844f3ca5cb465c9754a3638d7ca4af4c0a9d663e59ae98220b7fa0f4.
Sep 10 00:20:56.474905 containerd[1438]: time="2025-09-10T00:20:56.474871051Z" level=info msg="StartContainer for \"18523b6d844f3ca5cb465c9754a3638d7ca4af4c0a9d663e59ae98220b7fa0f4\" returns successfully"
Sep 10 00:20:56.555949 containerd[1438]: time="2025-09-10T00:20:56.555817387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-xr49x,Uid:bb383d17-c151-43ec-94eb-e9c05160dae0,Namespace:tigera-operator,Attempt:0,}"
Sep 10 00:20:56.573153 containerd[1438]: time="2025-09-10T00:20:56.572991678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:20:56.573153 containerd[1438]: time="2025-09-10T00:20:56.573111176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:20:56.573153 containerd[1438]: time="2025-09-10T00:20:56.573126979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:20:56.573372 containerd[1438]: time="2025-09-10T00:20:56.573244437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:20:56.598231 systemd[1]: Started cri-containerd-c4923d60280a321f9b1943debe3b90721d43f73f41452e042622cbff730bb96e.scope - libcontainer container c4923d60280a321f9b1943debe3b90721d43f73f41452e042622cbff730bb96e.
Sep 10 00:20:56.627112 containerd[1438]: time="2025-09-10T00:20:56.627074106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-xr49x,Uid:bb383d17-c151-43ec-94eb-e9c05160dae0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c4923d60280a321f9b1943debe3b90721d43f73f41452e042622cbff730bb96e\""
Sep 10 00:20:56.628940 containerd[1438]: time="2025-09-10T00:20:56.628881705Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 10 00:20:57.412971 kubelet[2466]: E0910 00:20:57.412944 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:57.422454 kubelet[2466]: I0910 00:20:57.422318 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kbhqk" podStartSLOduration=1.422305915 podStartE2EDuration="1.422305915s" podCreationTimestamp="2025-09-10 00:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:20:57.422043957 +0000 UTC m=+8.120164003" watchObservedRunningTime="2025-09-10 00:20:57.422305915 +0000 UTC m=+8.120425961"
Sep 10 00:20:58.230951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1373520604.mount: Deactivated successfully.
Sep 10 00:20:58.560246 containerd[1438]: time="2025-09-10T00:20:58.560142638Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:20:58.561840 containerd[1438]: time="2025-09-10T00:20:58.561810748Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 10 00:20:58.562752 containerd[1438]: time="2025-09-10T00:20:58.562705032Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:20:58.564893 containerd[1438]: time="2025-09-10T00:20:58.564864130Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:20:58.565803 containerd[1438]: time="2025-09-10T00:20:58.565692965Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.936778334s"
Sep 10 00:20:58.565803 containerd[1438]: time="2025-09-10T00:20:58.565724769Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 10 00:20:58.568807 containerd[1438]: time="2025-09-10T00:20:58.568776590Z" level=info msg="CreateContainer within sandbox \"c4923d60280a321f9b1943debe3b90721d43f73f41452e042622cbff730bb96e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 10 00:20:58.577894 containerd[1438]: time="2025-09-10T00:20:58.577863406Z" level=info msg="CreateContainer within sandbox \"c4923d60280a321f9b1943debe3b90721d43f73f41452e042622cbff730bb96e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"49f780fdc789a941b124d238f7cfe9db9e588216f0d8c01feeb3f2d0f7579465\""
Sep 10 00:20:58.578595 containerd[1438]: time="2025-09-10T00:20:58.578569263Z" level=info msg="StartContainer for \"49f780fdc789a941b124d238f7cfe9db9e588216f0d8c01feeb3f2d0f7579465\""
Sep 10 00:20:58.605273 systemd[1]: Started cri-containerd-49f780fdc789a941b124d238f7cfe9db9e588216f0d8c01feeb3f2d0f7579465.scope - libcontainer container 49f780fdc789a941b124d238f7cfe9db9e588216f0d8c01feeb3f2d0f7579465.
Sep 10 00:20:58.626162 containerd[1438]: time="2025-09-10T00:20:58.626119312Z" level=info msg="StartContainer for \"49f780fdc789a941b124d238f7cfe9db9e588216f0d8c01feeb3f2d0f7579465\" returns successfully"
Sep 10 00:20:58.760151 kubelet[2466]: E0910 00:20:58.760091 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:59.233314 kubelet[2466]: E0910 00:20:59.232947 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:59.416784 kubelet[2466]: E0910 00:20:59.416755 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:20:59.417035 kubelet[2466]: E0910 00:20:59.417013 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:21:03.867481 sudo[1619]: pam_unix(sudo:session): session closed for user root
Sep 10 00:21:03.871626 sshd[1616]: pam_unix(sshd:session): session closed for user core
Sep 10 00:21:03.877514 systemd[1]: sshd@6-10.0.0.141:22-10.0.0.1:38124.service: Deactivated successfully.
Sep 10 00:21:03.881955 systemd[1]: session-7.scope: Deactivated successfully.
Sep 10 00:21:03.883121 systemd[1]: session-7.scope: Consumed 6.488s CPU time, 151.1M memory peak, 0B memory swap peak.
Sep 10 00:21:03.883878 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit.
Sep 10 00:21:03.885377 systemd-logind[1422]: Removed session 7.
Sep 10 00:21:04.280285 kubelet[2466]: E0910 00:21:04.280179 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:21:04.299555 kubelet[2466]: I0910 00:21:04.299495 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-xr49x" podStartSLOduration=6.361167052 podStartE2EDuration="8.299481248s" podCreationTimestamp="2025-09-10 00:20:56 +0000 UTC" firstStartedPulling="2025-09-10 00:20:56.628307977 +0000 UTC m=+7.326428023" lastFinishedPulling="2025-09-10 00:20:58.566622213 +0000 UTC m=+9.264742219" observedRunningTime="2025-09-10 00:20:59.440900142 +0000 UTC m=+10.139020148" watchObservedRunningTime="2025-09-10 00:21:04.299481248 +0000 UTC m=+14.997601254"
Sep 10 00:21:06.132186 update_engine[1425]: I20250910 00:21:06.132114 1425 update_attempter.cc:509] Updating boot flags...
Sep 10 00:21:06.180926 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2890)
Sep 10 00:21:06.228145 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2893)
Sep 10 00:21:08.390295 systemd[1]: Created slice kubepods-besteffort-pod0d1520f7_f8e3_4f89_8f91_c8ab40248524.slice - libcontainer container kubepods-besteffort-pod0d1520f7_f8e3_4f89_8f91_c8ab40248524.slice.
Sep 10 00:21:08.404224 kubelet[2466]: I0910 00:21:08.404177 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d1520f7-f8e3-4f89-8f91-c8ab40248524-tigera-ca-bundle\") pod \"calico-typha-7f7895dc6d-m2gdv\" (UID: \"0d1520f7-f8e3-4f89-8f91-c8ab40248524\") " pod="calico-system/calico-typha-7f7895dc6d-m2gdv" Sep 10 00:21:08.404224 kubelet[2466]: I0910 00:21:08.404224 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0d1520f7-f8e3-4f89-8f91-c8ab40248524-typha-certs\") pod \"calico-typha-7f7895dc6d-m2gdv\" (UID: \"0d1520f7-f8e3-4f89-8f91-c8ab40248524\") " pod="calico-system/calico-typha-7f7895dc6d-m2gdv" Sep 10 00:21:08.404565 kubelet[2466]: I0910 00:21:08.404254 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8r8n\" (UniqueName: \"kubernetes.io/projected/0d1520f7-f8e3-4f89-8f91-c8ab40248524-kube-api-access-t8r8n\") pod \"calico-typha-7f7895dc6d-m2gdv\" (UID: \"0d1520f7-f8e3-4f89-8f91-c8ab40248524\") " pod="calico-system/calico-typha-7f7895dc6d-m2gdv" Sep 10 00:21:08.553325 systemd[1]: Created slice kubepods-besteffort-podc6d46bd8_c4f4_400f_a139_20e57de6e288.slice - libcontainer container kubepods-besteffort-podc6d46bd8_c4f4_400f_a139_20e57de6e288.slice. Sep 10 00:21:08.605809 kubelet[2466]: I0910 00:21:08.605774 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6d46bd8-c4f4-400f-a139-20e57de6e288-tigera-ca-bundle\") pod \"calico-node-h7xfz\" (UID: \"c6d46bd8-c4f4-400f-a139-20e57de6e288\") " pod="calico-system/calico-node-h7xfz" Sep 10 00:21:08.606267 kubelet[2466]: I0910 00:21:08.605886 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c6d46bd8-c4f4-400f-a139-20e57de6e288-policysync\") pod \"calico-node-h7xfz\" (UID: \"c6d46bd8-c4f4-400f-a139-20e57de6e288\") " pod="calico-system/calico-node-h7xfz" Sep 10 00:21:08.606267 kubelet[2466]: I0910 00:21:08.605905 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c6d46bd8-c4f4-400f-a139-20e57de6e288-var-lib-calico\") pod \"calico-node-h7xfz\" (UID: \"c6d46bd8-c4f4-400f-a139-20e57de6e288\") " pod="calico-system/calico-node-h7xfz" Sep 10 00:21:08.606267 kubelet[2466]: I0910 00:21:08.605928 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c6d46bd8-c4f4-400f-a139-20e57de6e288-var-run-calico\") pod \"calico-node-h7xfz\" (UID: \"c6d46bd8-c4f4-400f-a139-20e57de6e288\") " pod="calico-system/calico-node-h7xfz" Sep 10 00:21:08.606267 kubelet[2466]: I0910 00:21:08.605942 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c6d46bd8-c4f4-400f-a139-20e57de6e288-node-certs\") pod \"calico-node-h7xfz\" (UID: \"c6d46bd8-c4f4-400f-a139-20e57de6e288\") " pod="calico-system/calico-node-h7xfz" Sep 10 00:21:08.606267 kubelet[2466]: I0910 00:21:08.605958 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/c6d46bd8-c4f4-400f-a139-20e57de6e288-xtables-lock\") pod \"calico-node-h7xfz\" (UID: \"c6d46bd8-c4f4-400f-a139-20e57de6e288\") " pod="calico-system/calico-node-h7xfz" Sep 10 00:21:08.606379 kubelet[2466]: I0910 00:21:08.605973 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c6d46bd8-c4f4-400f-a139-20e57de6e288-cni-net-dir\") pod \"calico-node-h7xfz\" (UID: \"c6d46bd8-c4f4-400f-a139-20e57de6e288\") " pod="calico-system/calico-node-h7xfz" Sep 10 00:21:08.606379 kubelet[2466]: I0910 00:21:08.605989 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c6d46bd8-c4f4-400f-a139-20e57de6e288-cni-bin-dir\") pod \"calico-node-h7xfz\" (UID: \"c6d46bd8-c4f4-400f-a139-20e57de6e288\") " pod="calico-system/calico-node-h7xfz" Sep 10 00:21:08.606379 kubelet[2466]: I0910 00:21:08.606003 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c6d46bd8-c4f4-400f-a139-20e57de6e288-flexvol-driver-host\") pod \"calico-node-h7xfz\" (UID: \"c6d46bd8-c4f4-400f-a139-20e57de6e288\") " pod="calico-system/calico-node-h7xfz" Sep 10 00:21:08.606379 kubelet[2466]: I0910 00:21:08.606024 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6d46bd8-c4f4-400f-a139-20e57de6e288-lib-modules\") pod \"calico-node-h7xfz\" (UID: \"c6d46bd8-c4f4-400f-a139-20e57de6e288\") " pod="calico-system/calico-node-h7xfz" Sep 10 00:21:08.606379 kubelet[2466]: I0910 00:21:08.606038 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c6d46bd8-c4f4-400f-a139-20e57de6e288-cni-log-dir\") pod \"calico-node-h7xfz\" (UID: \"c6d46bd8-c4f4-400f-a139-20e57de6e288\") " pod="calico-system/calico-node-h7xfz" Sep 10 00:21:08.606477 kubelet[2466]: I0910 00:21:08.606072 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkzbz\" (UniqueName: \"kubernetes.io/projected/c6d46bd8-c4f4-400f-a139-20e57de6e288-kube-api-access-jkzbz\") pod \"calico-node-h7xfz\" (UID: \"c6d46bd8-c4f4-400f-a139-20e57de6e288\") " pod="calico-system/calico-node-h7xfz" Sep 10 00:21:08.696589 kubelet[2466]: E0910 00:21:08.696310 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:21:08.696723 containerd[1438]: time="2025-09-10T00:21:08.696686549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f7895dc6d-m2gdv,Uid:0d1520f7-f8e3-4f89-8f91-c8ab40248524,Namespace:calico-system,Attempt:0,}" Sep 10 00:21:08.713908 kubelet[2466]: E0910 00:21:08.712107 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.713908 kubelet[2466]: W0910 00:21:08.712127 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.713908 kubelet[2466]: E0910 00:21:08.712327 2466 plugins.go:695] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.729080 kubelet[2466]: E0910 00:21:08.727045 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.729080 kubelet[2466]: W0910 00:21:08.727076 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.729080 kubelet[2466]: E0910 00:21:08.727095 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.735167 kubelet[2466]: E0910 00:21:08.735135 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.735167 kubelet[2466]: W0910 00:21:08.735155 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.735167 kubelet[2466]: E0910 00:21:08.735175 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.736490 containerd[1438]: time="2025-09-10T00:21:08.736405451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:21:08.736577 containerd[1438]: time="2025-09-10T00:21:08.736470816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:21:08.736577 containerd[1438]: time="2025-09-10T00:21:08.736488457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:08.736577 containerd[1438]: time="2025-09-10T00:21:08.736560063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:08.746702 kubelet[2466]: E0910 00:21:08.746642 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2dhv8" podUID="c0d446ad-5074-4566-b66e-22a6ab7ca731" Sep 10 00:21:08.777302 systemd[1]: Started cri-containerd-06930b4970027997c7454954fd724bbb7720bf173237ec10d50168deffe2a443.scope - libcontainer container 06930b4970027997c7454954fd724bbb7720bf173237ec10d50168deffe2a443. 
Sep 10 00:21:08.785696 kubelet[2466]: E0910 00:21:08.785668 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.785696 kubelet[2466]: W0910 00:21:08.785692 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.785831 kubelet[2466]: E0910 00:21:08.785714 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.786197 kubelet[2466]: E0910 00:21:08.786179 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.786248 kubelet[2466]: W0910 00:21:08.786195 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.786276 kubelet[2466]: E0910 00:21:08.786246 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.786441 kubelet[2466]: E0910 00:21:08.786427 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.786441 kubelet[2466]: W0910 00:21:08.786439 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.786501 kubelet[2466]: E0910 00:21:08.786450 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.786617 kubelet[2466]: E0910 00:21:08.786603 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.786617 kubelet[2466]: W0910 00:21:08.786615 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.786669 kubelet[2466]: E0910 00:21:08.786622 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.786787 kubelet[2466]: E0910 00:21:08.786773 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.786787 kubelet[2466]: W0910 00:21:08.786784 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.786842 kubelet[2466]: E0910 00:21:08.786791 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.786931 kubelet[2466]: E0910 00:21:08.786918 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.786931 kubelet[2466]: W0910 00:21:08.786927 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.786984 kubelet[2466]: E0910 00:21:08.786934 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.787106 kubelet[2466]: E0910 00:21:08.787091 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.787106 kubelet[2466]: W0910 00:21:08.787104 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.787175 kubelet[2466]: E0910 00:21:08.787112 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.787311 kubelet[2466]: E0910 00:21:08.787295 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.787311 kubelet[2466]: W0910 00:21:08.787309 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.787367 kubelet[2466]: E0910 00:21:08.787318 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.787495 kubelet[2466]: E0910 00:21:08.787480 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.787495 kubelet[2466]: W0910 00:21:08.787491 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.787551 kubelet[2466]: E0910 00:21:08.787499 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.787685 kubelet[2466]: E0910 00:21:08.787672 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.787718 kubelet[2466]: W0910 00:21:08.787686 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.787718 kubelet[2466]: E0910 00:21:08.787694 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.787865 kubelet[2466]: E0910 00:21:08.787850 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.787865 kubelet[2466]: W0910 00:21:08.787865 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.787966 kubelet[2466]: E0910 00:21:08.787872 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.788331 kubelet[2466]: E0910 00:21:08.788310 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.788331 kubelet[2466]: W0910 00:21:08.788330 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.788400 kubelet[2466]: E0910 00:21:08.788342 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.788808 kubelet[2466]: E0910 00:21:08.788787 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.788808 kubelet[2466]: W0910 00:21:08.788804 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.788865 kubelet[2466]: E0910 00:21:08.788815 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.788996 kubelet[2466]: E0910 00:21:08.788982 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.788996 kubelet[2466]: W0910 00:21:08.788994 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.789056 kubelet[2466]: E0910 00:21:08.789004 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.790384 kubelet[2466]: E0910 00:21:08.790363 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.790426 kubelet[2466]: W0910 00:21:08.790385 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.790426 kubelet[2466]: E0910 00:21:08.790398 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.790672 kubelet[2466]: E0910 00:21:08.790653 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.790702 kubelet[2466]: W0910 00:21:08.790670 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.790702 kubelet[2466]: E0910 00:21:08.790682 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.790901 kubelet[2466]: E0910 00:21:08.790883 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.790901 kubelet[2466]: W0910 00:21:08.790898 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.790958 kubelet[2466]: E0910 00:21:08.790908 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.793115 kubelet[2466]: E0910 00:21:08.793096 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.793148 kubelet[2466]: W0910 00:21:08.793115 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.793148 kubelet[2466]: E0910 00:21:08.793128 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.793314 kubelet[2466]: E0910 00:21:08.793300 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.793314 kubelet[2466]: W0910 00:21:08.793311 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.793368 kubelet[2466]: E0910 00:21:08.793319 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.793456 kubelet[2466]: E0910 00:21:08.793443 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.793456 kubelet[2466]: W0910 00:21:08.793452 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.793508 kubelet[2466]: E0910 00:21:08.793460 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.807561 kubelet[2466]: E0910 00:21:08.807545 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.807735 kubelet[2466]: W0910 00:21:08.807611 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.807735 kubelet[2466]: E0910 00:21:08.807626 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.807735 kubelet[2466]: I0910 00:21:08.807654 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c0d446ad-5074-4566-b66e-22a6ab7ca731-kubelet-dir\") pod \"csi-node-driver-2dhv8\" (UID: \"c0d446ad-5074-4566-b66e-22a6ab7ca731\") " pod="calico-system/csi-node-driver-2dhv8" Sep 10 00:21:08.807941 kubelet[2466]: E0910 00:21:08.807922 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.807941 kubelet[2466]: W0910 00:21:08.807940 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.808004 kubelet[2466]: E0910 00:21:08.807954 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.808111 kubelet[2466]: E0910 00:21:08.808099 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.808111 kubelet[2466]: W0910 00:21:08.808110 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.808191 kubelet[2466]: E0910 00:21:08.808123 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.809168 kubelet[2466]: E0910 00:21:08.809153 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.809212 kubelet[2466]: W0910 00:21:08.809169 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.809212 kubelet[2466]: E0910 00:21:08.809180 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.809212 kubelet[2466]: I0910 00:21:08.809199 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c0d446ad-5074-4566-b66e-22a6ab7ca731-registration-dir\") pod \"csi-node-driver-2dhv8\" (UID: \"c0d446ad-5074-4566-b66e-22a6ab7ca731\") " pod="calico-system/csi-node-driver-2dhv8" Sep 10 00:21:08.809483 kubelet[2466]: E0910 00:21:08.809467 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.809483 kubelet[2466]: W0910 00:21:08.809482 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.809579 kubelet[2466]: E0910 00:21:08.809493 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.809579 kubelet[2466]: I0910 00:21:08.809509 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c0d446ad-5074-4566-b66e-22a6ab7ca731-socket-dir\") pod \"csi-node-driver-2dhv8\" (UID: \"c0d446ad-5074-4566-b66e-22a6ab7ca731\") " pod="calico-system/csi-node-driver-2dhv8" Sep 10 00:21:08.809797 kubelet[2466]: E0910 00:21:08.809778 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.809797 kubelet[2466]: W0910 00:21:08.809797 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.809854 kubelet[2466]: E0910 00:21:08.809814 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.809968 kubelet[2466]: I0910 00:21:08.809953 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c0d446ad-5074-4566-b66e-22a6ab7ca731-varrun\") pod \"csi-node-driver-2dhv8\" (UID: \"c0d446ad-5074-4566-b66e-22a6ab7ca731\") " pod="calico-system/csi-node-driver-2dhv8" Sep 10 00:21:08.811086 containerd[1438]: time="2025-09-10T00:21:08.811021578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f7895dc6d-m2gdv,Uid:0d1520f7-f8e3-4f89-8f91-c8ab40248524,Namespace:calico-system,Attempt:0,} returns sandbox id \"06930b4970027997c7454954fd724bbb7720bf173237ec10d50168deffe2a443\"" Sep 10 00:21:08.812221 kubelet[2466]: E0910 00:21:08.811801 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:21:08.812221 kubelet[2466]: E0910 00:21:08.811941 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.812221 kubelet[2466]: W0910 00:21:08.811957 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.812221 kubelet[2466]: E0910 00:21:08.811971 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.812221 kubelet[2466]: E0910 00:21:08.812178 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.812372 kubelet[2466]: W0910 00:21:08.812188 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.812457 kubelet[2466]: E0910 00:21:08.812439 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.812523 containerd[1438]: time="2025-09-10T00:21:08.812490939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 00:21:08.812645 kubelet[2466]: E0910 00:21:08.812628 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.812645 kubelet[2466]: W0910 00:21:08.812643 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.812770 kubelet[2466]: E0910 00:21:08.812690 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.812952 kubelet[2466]: E0910 00:21:08.812936 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.812993 kubelet[2466]: W0910 00:21:08.812952 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.813056 kubelet[2466]: E0910 00:21:08.813034 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.813153 kubelet[2466]: E0910 00:21:08.813107 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.813153 kubelet[2466]: W0910 00:21:08.813123 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.813153 kubelet[2466]: I0910 00:21:08.813126 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs5b5\" (UniqueName: \"kubernetes.io/projected/c0d446ad-5074-4566-b66e-22a6ab7ca731-kube-api-access-xs5b5\") pod \"csi-node-driver-2dhv8\" (UID: \"c0d446ad-5074-4566-b66e-22a6ab7ca731\") " pod="calico-system/csi-node-driver-2dhv8" Sep 10 00:21:08.813153 kubelet[2466]: E0910 00:21:08.813141 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.813298 kubelet[2466]: E0910 00:21:08.813279 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.813298 kubelet[2466]: W0910 00:21:08.813288 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.813298 kubelet[2466]: E0910 00:21:08.813298 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.813537 kubelet[2466]: E0910 00:21:08.813520 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.813537 kubelet[2466]: W0910 00:21:08.813536 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.813597 kubelet[2466]: E0910 00:21:08.813549 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.813898 kubelet[2466]: E0910 00:21:08.813866 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.813898 kubelet[2466]: W0910 00:21:08.813888 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.813973 kubelet[2466]: E0910 00:21:08.813901 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.814710 kubelet[2466]: E0910 00:21:08.814448 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.815155 kubelet[2466]: W0910 00:21:08.815134 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.815155 kubelet[2466]: E0910 00:21:08.815155 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.859558 containerd[1438]: time="2025-09-10T00:21:08.859190334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h7xfz,Uid:c6d46bd8-c4f4-400f-a139-20e57de6e288,Namespace:calico-system,Attempt:0,}" Sep 10 00:21:08.880393 containerd[1438]: time="2025-09-10T00:21:08.879990122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:21:08.880393 containerd[1438]: time="2025-09-10T00:21:08.880064168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:21:08.880393 containerd[1438]: time="2025-09-10T00:21:08.880076809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:08.880393 containerd[1438]: time="2025-09-10T00:21:08.880177817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:08.909373 systemd[1]: Started cri-containerd-67e5e1a8bee4a41b5aa6a217bc139fe0c447fb8481f0af37b37cb219a54ba1c9.scope - libcontainer container 67e5e1a8bee4a41b5aa6a217bc139fe0c447fb8481f0af37b37cb219a54ba1c9. Sep 10 00:21:08.915196 kubelet[2466]: E0910 00:21:08.915164 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.915196 kubelet[2466]: W0910 00:21:08.915186 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.917187 kubelet[2466]: E0910 00:21:08.915205 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.917187 kubelet[2466]: E0910 00:21:08.916638 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.917187 kubelet[2466]: W0910 00:21:08.916651 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.917187 kubelet[2466]: E0910 00:21:08.916671 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.917187 kubelet[2466]: E0910 00:21:08.916854 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.917187 kubelet[2466]: W0910 00:21:08.916863 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.917187 kubelet[2466]: E0910 00:21:08.916929 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.917187 kubelet[2466]: E0910 00:21:08.916994 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.917187 kubelet[2466]: W0910 00:21:08.917001 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.917187 kubelet[2466]: E0910 00:21:08.917036 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.918362 kubelet[2466]: E0910 00:21:08.917252 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.918362 kubelet[2466]: W0910 00:21:08.917261 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.918362 kubelet[2466]: E0910 00:21:08.917275 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.918362 kubelet[2466]: E0910 00:21:08.917433 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.918362 kubelet[2466]: W0910 00:21:08.917440 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.918362 kubelet[2466]: E0910 00:21:08.917449 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.918362 kubelet[2466]: E0910 00:21:08.917584 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.918362 kubelet[2466]: W0910 00:21:08.917592 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.918362 kubelet[2466]: E0910 00:21:08.917600 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.918362 kubelet[2466]: E0910 00:21:08.917832 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.919353 kubelet[2466]: W0910 00:21:08.917844 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.919353 kubelet[2466]: E0910 00:21:08.917856 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.919353 kubelet[2466]: E0910 00:21:08.918021 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.919353 kubelet[2466]: W0910 00:21:08.918029 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.919353 kubelet[2466]: E0910 00:21:08.918041 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.919353 kubelet[2466]: E0910 00:21:08.918708 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.919353 kubelet[2466]: W0910 00:21:08.918721 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.919353 kubelet[2466]: E0910 00:21:08.918793 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.919353 kubelet[2466]: E0910 00:21:08.918961 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.919353 kubelet[2466]: W0910 00:21:08.918985 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.919553 kubelet[2466]: E0910 00:21:08.919044 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.919553 kubelet[2466]: E0910 00:21:08.919200 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.919553 kubelet[2466]: W0910 00:21:08.919208 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.919553 kubelet[2466]: E0910 00:21:08.919279 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.920082 kubelet[2466]: E0910 00:21:08.919954 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.920082 kubelet[2466]: W0910 00:21:08.920018 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.920242 kubelet[2466]: E0910 00:21:08.920185 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.921030 kubelet[2466]: E0910 00:21:08.920340 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.921030 kubelet[2466]: W0910 00:21:08.920353 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.921030 kubelet[2466]: E0910 00:21:08.920409 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.921320 kubelet[2466]: E0910 00:21:08.921289 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.921320 kubelet[2466]: W0910 00:21:08.921308 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.921615 kubelet[2466]: E0910 00:21:08.921589 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.922517 kubelet[2466]: E0910 00:21:08.921783 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.922517 kubelet[2466]: W0910 00:21:08.921793 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.922517 kubelet[2466]: E0910 00:21:08.922059 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.922784 kubelet[2466]: E0910 00:21:08.922761 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.922784 kubelet[2466]: W0910 00:21:08.922781 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.922918 kubelet[2466]: E0910 00:21:08.922902 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.924440 kubelet[2466]: E0910 00:21:08.923225 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.924440 kubelet[2466]: W0910 00:21:08.923249 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.924440 kubelet[2466]: E0910 00:21:08.923418 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.924440 kubelet[2466]: E0910 00:21:08.923937 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.924440 kubelet[2466]: W0910 00:21:08.923949 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.924440 kubelet[2466]: E0910 00:21:08.924226 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.925171 kubelet[2466]: E0910 00:21:08.924741 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.925171 kubelet[2466]: W0910 00:21:08.924856 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.925277 kubelet[2466]: E0910 00:21:08.925041 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.926468 kubelet[2466]: E0910 00:21:08.925439 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.926468 kubelet[2466]: W0910 00:21:08.925456 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.926468 kubelet[2466]: E0910 00:21:08.925744 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.926468 kubelet[2466]: E0910 00:21:08.926168 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.926468 kubelet[2466]: W0910 00:21:08.926179 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.926468 kubelet[2466]: E0910 00:21:08.926302 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.926941 kubelet[2466]: E0910 00:21:08.926910 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.926941 kubelet[2466]: W0910 00:21:08.926928 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.927202 kubelet[2466]: E0910 00:21:08.927173 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.927375 kubelet[2466]: E0910 00:21:08.927353 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.927375 kubelet[2466]: W0910 00:21:08.927370 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.927434 kubelet[2466]: E0910 00:21:08.927423 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.927813 kubelet[2466]: E0910 00:21:08.927779 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.927813 kubelet[2466]: W0910 00:21:08.927797 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.927919 kubelet[2466]: E0910 00:21:08.927901 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:08.953186 kubelet[2466]: E0910 00:21:08.949582 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:08.953186 kubelet[2466]: W0910 00:21:08.949606 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:08.953186 kubelet[2466]: E0910 00:21:08.949624 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:08.976655 containerd[1438]: time="2025-09-10T00:21:08.976601696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-h7xfz,Uid:c6d46bd8-c4f4-400f-a139-20e57de6e288,Namespace:calico-system,Attempt:0,} returns sandbox id \"67e5e1a8bee4a41b5aa6a217bc139fe0c447fb8481f0af37b37cb219a54ba1c9\"" Sep 10 00:21:09.788720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3779505265.mount: Deactivated successfully. Sep 10 00:21:10.276694 containerd[1438]: time="2025-09-10T00:21:10.276650655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:10.277987 containerd[1438]: time="2025-09-10T00:21:10.277926790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 10 00:21:10.281321 containerd[1438]: time="2025-09-10T00:21:10.281289481Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.468757779s" Sep 10 00:21:10.281321 containerd[1438]: time="2025-09-10T00:21:10.281323924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 10 00:21:10.282626 containerd[1438]: time="2025-09-10T00:21:10.282459928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 00:21:10.286808 containerd[1438]: time="2025-09-10T00:21:10.286749848Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:10.288087 containerd[1438]: time="2025-09-10T00:21:10.287617473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:10.300475 containerd[1438]: time="2025-09-10T00:21:10.300421069Z" level=info msg="CreateContainer within sandbox \"06930b4970027997c7454954fd724bbb7720bf173237ec10d50168deffe2a443\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 10 00:21:10.314218 containerd[1438]: time="2025-09-10T00:21:10.314178616Z" level=info msg="CreateContainer within sandbox \"06930b4970027997c7454954fd724bbb7720bf173237ec10d50168deffe2a443\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9ed6d6537255584d25c36c45dacfb5558297c6a9308aab1cdeb9ac7054df7068\"" Sep 10 00:21:10.314635 containerd[1438]: time="2025-09-10T00:21:10.314611168Z" level=info msg="StartContainer for \"9ed6d6537255584d25c36c45dacfb5558297c6a9308aab1cdeb9ac7054df7068\"" Sep 10 00:21:10.341232 systemd[1]: Started cri-containerd-9ed6d6537255584d25c36c45dacfb5558297c6a9308aab1cdeb9ac7054df7068.scope - libcontainer container 9ed6d6537255584d25c36c45dacfb5558297c6a9308aab1cdeb9ac7054df7068. 
Sep 10 00:21:10.382266 containerd[1438]: time="2025-09-10T00:21:10.382194012Z" level=info msg="StartContainer for \"9ed6d6537255584d25c36c45dacfb5558297c6a9308aab1cdeb9ac7054df7068\" returns successfully" Sep 10 00:21:10.391217 kubelet[2466]: E0910 00:21:10.391144 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2dhv8" podUID="c0d446ad-5074-4566-b66e-22a6ab7ca731" Sep 10 00:21:10.458999 kubelet[2466]: E0910 00:21:10.458947 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:21:10.478552 kubelet[2466]: I0910 00:21:10.478486 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7f7895dc6d-m2gdv" podStartSLOduration=1.008792468 podStartE2EDuration="2.478468598s" podCreationTimestamp="2025-09-10 00:21:08 +0000 UTC" firstStartedPulling="2025-09-10 00:21:08.812254439 +0000 UTC m=+19.510374445" lastFinishedPulling="2025-09-10 00:21:10.281930529 +0000 UTC m=+20.980050575" observedRunningTime="2025-09-10 00:21:10.478103931 +0000 UTC m=+21.176223977" watchObservedRunningTime="2025-09-10 00:21:10.478468598 +0000 UTC m=+21.176588644" Sep 10 00:21:10.512438 kubelet[2466]: E0910 00:21:10.512234 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.512438 kubelet[2466]: W0910 00:21:10.512263 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.512438 kubelet[2466]: E0910 00:21:10.512287 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.514271 kubelet[2466]: E0910 00:21:10.514250 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.514512 kubelet[2466]: W0910 00:21:10.514369 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.514512 kubelet[2466]: E0910 00:21:10.514424 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.516002 kubelet[2466]: E0910 00:21:10.515222 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.518641 kubelet[2466]: W0910 00:21:10.515238 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.518641 kubelet[2466]: E0910 00:21:10.517037 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:10.518641 kubelet[2466]: E0910 00:21:10.517295 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.518641 kubelet[2466]: W0910 00:21:10.517305 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.518641 kubelet[2466]: E0910 00:21:10.517314 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.518641 kubelet[2466]: E0910 00:21:10.517473 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.518641 kubelet[2466]: W0910 00:21:10.517480 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.518641 kubelet[2466]: E0910 00:21:10.517487 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.518641 kubelet[2466]: E0910 00:21:10.517615 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.518641 kubelet[2466]: W0910 00:21:10.517621 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.518902 kubelet[2466]: E0910 00:21:10.517628 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.518902 kubelet[2466]: E0910 00:21:10.517750 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.518902 kubelet[2466]: W0910 00:21:10.517756 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.518902 kubelet[2466]: E0910 00:21:10.517764 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.518902 kubelet[2466]: E0910 00:21:10.517928 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.518902 kubelet[2466]: W0910 00:21:10.517937 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.518902 kubelet[2466]: E0910 00:21:10.517946 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:10.518902 kubelet[2466]: E0910 00:21:10.518170 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.518902 kubelet[2466]: W0910 00:21:10.518179 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.518902 kubelet[2466]: E0910 00:21:10.518188 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.519131 kubelet[2466]: E0910 00:21:10.518365 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.519131 kubelet[2466]: W0910 00:21:10.518374 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.519131 kubelet[2466]: E0910 00:21:10.518383 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.519131 kubelet[2466]: E0910 00:21:10.518553 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.519131 kubelet[2466]: W0910 00:21:10.518560 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.519131 kubelet[2466]: E0910 00:21:10.518569 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.520085 kubelet[2466]: E0910 00:21:10.519424 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.520085 kubelet[2466]: W0910 00:21:10.519441 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.520085 kubelet[2466]: E0910 00:21:10.519454 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.520085 kubelet[2466]: E0910 00:21:10.519659 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.520085 kubelet[2466]: W0910 00:21:10.519667 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.520085 kubelet[2466]: E0910 00:21:10.519674 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:10.520085 kubelet[2466]: E0910 00:21:10.519828 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.520085 kubelet[2466]: W0910 00:21:10.519834 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.520085 kubelet[2466]: E0910 00:21:10.519841 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.520085 kubelet[2466]: E0910 00:21:10.519976 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.520389 kubelet[2466]: W0910 00:21:10.519982 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.520389 kubelet[2466]: E0910 00:21:10.519989 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.531005 kubelet[2466]: E0910 00:21:10.530824 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.531005 kubelet[2466]: W0910 00:21:10.530847 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.531005 kubelet[2466]: E0910 00:21:10.530867 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.531183 kubelet[2466]: E0910 00:21:10.531084 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.531183 kubelet[2466]: W0910 00:21:10.531092 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.531183 kubelet[2466]: E0910 00:21:10.531103 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.531424 kubelet[2466]: E0910 00:21:10.531407 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.531424 kubelet[2466]: W0910 00:21:10.531422 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.531495 kubelet[2466]: E0910 00:21:10.531437 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:10.531969 kubelet[2466]: E0910 00:21:10.531880 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.531969 kubelet[2466]: W0910 00:21:10.531899 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.531969 kubelet[2466]: E0910 00:21:10.531946 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.532388 kubelet[2466]: E0910 00:21:10.532202 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.532388 kubelet[2466]: W0910 00:21:10.532220 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.532388 kubelet[2466]: E0910 00:21:10.532233 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.532886 kubelet[2466]: E0910 00:21:10.532617 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.532886 kubelet[2466]: W0910 00:21:10.532670 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.532886 kubelet[2466]: E0910 00:21:10.532742 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.532886 kubelet[2466]: E0910 00:21:10.532856 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.532886 kubelet[2466]: W0910 00:21:10.532866 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.533024 kubelet[2466]: E0910 00:21:10.532989 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.533024 kubelet[2466]: W0910 00:21:10.532995 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.533379 kubelet[2466]: E0910 00:21:10.533144 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.533379 kubelet[2466]: E0910 00:21:10.533186 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:10.533379 kubelet[2466]: E0910 00:21:10.533237 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.533379 kubelet[2466]: W0910 00:21:10.533247 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.533379 kubelet[2466]: E0910 00:21:10.533260 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.533623 kubelet[2466]: E0910 00:21:10.533608 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.533623 kubelet[2466]: W0910 00:21:10.533619 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.533878 kubelet[2466]: E0910 00:21:10.533757 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.534043 kubelet[2466]: E0910 00:21:10.533996 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.534043 kubelet[2466]: W0910 00:21:10.534015 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.534043 kubelet[2466]: E0910 00:21:10.534032 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.534671 kubelet[2466]: E0910 00:21:10.534431 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.534671 kubelet[2466]: W0910 00:21:10.534455 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.534671 kubelet[2466]: E0910 00:21:10.534468 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.535124 kubelet[2466]: E0910 00:21:10.535020 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.535442 kubelet[2466]: W0910 00:21:10.535411 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.535750 kubelet[2466]: E0910 00:21:10.535689 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:10.536490 kubelet[2466]: E0910 00:21:10.536297 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.536753 kubelet[2466]: W0910 00:21:10.536594 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.536753 kubelet[2466]: E0910 00:21:10.536662 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.538000 kubelet[2466]: E0910 00:21:10.537952 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.538000 kubelet[2466]: W0910 00:21:10.537974 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.538314 kubelet[2466]: E0910 00:21:10.538292 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.539940 kubelet[2466]: E0910 00:21:10.539628 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.539940 kubelet[2466]: W0910 00:21:10.539646 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.539940 kubelet[2466]: E0910 00:21:10.539665 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.540687 kubelet[2466]: E0910 00:21:10.540669 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.540863 kubelet[2466]: W0910 00:21:10.540782 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.540943 kubelet[2466]: E0910 00:21:10.540919 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:10.541323 kubelet[2466]: E0910 00:21:10.541268 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:10.541323 kubelet[2466]: W0910 00:21:10.541285 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:10.541323 kubelet[2466]: E0910 00:21:10.541297 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:11.458928 kubelet[2466]: I0910 00:21:11.458894 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:21:11.459400 kubelet[2466]: E0910 00:21:11.459231 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:21:11.527022 kubelet[2466]: E0910 00:21:11.526925 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.527022 kubelet[2466]: W0910 00:21:11.526945 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.527022 kubelet[2466]: E0910 00:21:11.526964 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.527858 kubelet[2466]: E0910 00:21:11.527837 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.527858 kubelet[2466]: W0910 00:21:11.527853 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.527923 kubelet[2466]: E0910 00:21:11.527890 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.528446 kubelet[2466]: E0910 00:21:11.528432 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.528446 kubelet[2466]: W0910 00:21:11.528444 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.528520 kubelet[2466]: E0910 00:21:11.528453 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.530166 kubelet[2466]: E0910 00:21:11.528650 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.530166 kubelet[2466]: W0910 00:21:11.528658 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.530166 kubelet[2466]: E0910 00:21:11.528720 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:11.530166 kubelet[2466]: E0910 00:21:11.528997 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.530166 kubelet[2466]: W0910 00:21:11.529023 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.530166 kubelet[2466]: E0910 00:21:11.529034 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.530166 kubelet[2466]: E0910 00:21:11.529248 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.530166 kubelet[2466]: W0910 00:21:11.529259 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.530166 kubelet[2466]: E0910 00:21:11.529268 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.530166 kubelet[2466]: E0910 00:21:11.529791 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.530435 kubelet[2466]: W0910 00:21:11.529803 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.530435 kubelet[2466]: E0910 00:21:11.529814 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.530927 kubelet[2466]: E0910 00:21:11.530905 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.530927 kubelet[2466]: W0910 00:21:11.530921 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.531001 kubelet[2466]: E0910 00:21:11.530933 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.531275 kubelet[2466]: E0910 00:21:11.531158 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.531275 kubelet[2466]: W0910 00:21:11.531171 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.531275 kubelet[2466]: E0910 00:21:11.531180 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:11.531424 kubelet[2466]: E0910 00:21:11.531353 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.531424 kubelet[2466]: W0910 00:21:11.531363 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.531424 kubelet[2466]: E0910 00:21:11.531371 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.531543 kubelet[2466]: E0910 00:21:11.531514 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.531543 kubelet[2466]: W0910 00:21:11.531542 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.531608 kubelet[2466]: E0910 00:21:11.531552 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.531854 kubelet[2466]: E0910 00:21:11.531838 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.531854 kubelet[2466]: W0910 00:21:11.531851 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.531942 kubelet[2466]: E0910 00:21:11.531861 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.532022 kubelet[2466]: E0910 00:21:11.532012 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.532022 kubelet[2466]: W0910 00:21:11.532022 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.532097 kubelet[2466]: E0910 00:21:11.532030 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.532183 kubelet[2466]: E0910 00:21:11.532173 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.532183 kubelet[2466]: W0910 00:21:11.532182 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.532237 kubelet[2466]: E0910 00:21:11.532191 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:11.532421 kubelet[2466]: E0910 00:21:11.532322 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.532421 kubelet[2466]: W0910 00:21:11.532335 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.532421 kubelet[2466]: E0910 00:21:11.532343 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.538910 kubelet[2466]: E0910 00:21:11.538876 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.538910 kubelet[2466]: W0910 00:21:11.538892 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.538910 kubelet[2466]: E0910 00:21:11.538906 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.540492 kubelet[2466]: E0910 00:21:11.539392 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.540492 kubelet[2466]: W0910 00:21:11.539444 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.540492 kubelet[2466]: E0910 00:21:11.539462 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.540492 kubelet[2466]: E0910 00:21:11.539705 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.540492 kubelet[2466]: W0910 00:21:11.539714 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.540492 kubelet[2466]: E0910 00:21:11.539731 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.540492 kubelet[2466]: E0910 00:21:11.540120 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.540492 kubelet[2466]: W0910 00:21:11.540130 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.540492 kubelet[2466]: E0910 00:21:11.540176 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:11.540492 kubelet[2466]: E0910 00:21:11.540477 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.540743 kubelet[2466]: W0910 00:21:11.540487 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.540743 kubelet[2466]: E0910 00:21:11.540517 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.540782 kubelet[2466]: E0910 00:21:11.540753 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.540782 kubelet[2466]: W0910 00:21:11.540762 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.540823 kubelet[2466]: E0910 00:21:11.540787 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.541254 kubelet[2466]: E0910 00:21:11.541043 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.541254 kubelet[2466]: W0910 00:21:11.541086 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.541254 kubelet[2466]: E0910 00:21:11.541112 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.542447 kubelet[2466]: E0910 00:21:11.541446 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.542447 kubelet[2466]: W0910 00:21:11.541458 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.542447 kubelet[2466]: E0910 00:21:11.541475 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.542447 kubelet[2466]: E0910 00:21:11.541777 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.542447 kubelet[2466]: W0910 00:21:11.541788 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.542447 kubelet[2466]: E0910 00:21:11.541842 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:11.542447 kubelet[2466]: E0910 00:21:11.542327 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.542447 kubelet[2466]: W0910 00:21:11.542340 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.542447 kubelet[2466]: E0910 00:21:11.542388 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.542748 kubelet[2466]: E0910 00:21:11.542730 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.542748 kubelet[2466]: W0910 00:21:11.542747 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.542841 kubelet[2466]: E0910 00:21:11.542800 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.543127 kubelet[2466]: E0910 00:21:11.543113 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.543127 kubelet[2466]: W0910 00:21:11.543125 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.543193 kubelet[2466]: E0910 00:21:11.543170 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.543485 kubelet[2466]: E0910 00:21:11.543471 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.543485 kubelet[2466]: W0910 00:21:11.543483 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.543542 kubelet[2466]: E0910 00:21:11.543497 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.543867 kubelet[2466]: E0910 00:21:11.543853 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.543900 kubelet[2466]: W0910 00:21:11.543866 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.543927 kubelet[2466]: E0910 00:21:11.543907 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:11.545177 kubelet[2466]: E0910 00:21:11.544407 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.545177 kubelet[2466]: W0910 00:21:11.544427 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.545177 kubelet[2466]: E0910 00:21:11.544444 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.545177 kubelet[2466]: E0910 00:21:11.544775 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.545177 kubelet[2466]: W0910 00:21:11.544789 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.545177 kubelet[2466]: E0910 00:21:11.544803 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.545177 kubelet[2466]: E0910 00:21:11.545068 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.545177 kubelet[2466]: W0910 00:21:11.545082 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.545177 kubelet[2466]: E0910 00:21:11.545092 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:21:11.545425 kubelet[2466]: E0910 00:21:11.545363 2466 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:21:11.545425 kubelet[2466]: W0910 00:21:11.545379 2466 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:21:11.545425 kubelet[2466]: E0910 00:21:11.545389 2466 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:21:11.939132 containerd[1438]: time="2025-09-10T00:21:11.939090756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:11.939839 containerd[1438]: time="2025-09-10T00:21:11.939810447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 10 00:21:11.940691 containerd[1438]: time="2025-09-10T00:21:11.940662708Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:11.943064 containerd[1438]: time="2025-09-10T00:21:11.943015276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:11.943498 containerd[1438]: time="2025-09-10T00:21:11.943475148Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.660980658s" Sep 10 00:21:11.943549 containerd[1438]: time="2025-09-10T00:21:11.943504470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 10 00:21:11.946006 containerd[1438]: time="2025-09-10T00:21:11.945976687Z" level=info msg="CreateContainer within sandbox \"67e5e1a8bee4a41b5aa6a217bc139fe0c447fb8481f0af37b37cb219a54ba1c9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 00:21:11.957808 containerd[1438]: time="2025-09-10T00:21:11.957755126Z" level=info msg="CreateContainer within sandbox \"67e5e1a8bee4a41b5aa6a217bc139fe0c447fb8481f0af37b37cb219a54ba1c9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2307dc6831e651891ac3a46111e3c86d6c7e10e19ec624089ee27707c79ecd1e\"" Sep 10 00:21:11.959508 containerd[1438]: time="2025-09-10T00:21:11.959474728Z" level=info msg="StartContainer for \"2307dc6831e651891ac3a46111e3c86d6c7e10e19ec624089ee27707c79ecd1e\"" Sep 10 00:21:11.997209 systemd[1]: Started cri-containerd-2307dc6831e651891ac3a46111e3c86d6c7e10e19ec624089ee27707c79ecd1e.scope - libcontainer container 2307dc6831e651891ac3a46111e3c86d6c7e10e19ec624089ee27707c79ecd1e. Sep 10 00:21:12.033199 systemd[1]: cri-containerd-2307dc6831e651891ac3a46111e3c86d6c7e10e19ec624089ee27707c79ecd1e.scope: Deactivated successfully. Sep 10 00:21:12.036558 containerd[1438]: time="2025-09-10T00:21:12.036519265Z" level=info msg="StartContainer for \"2307dc6831e651891ac3a46111e3c86d6c7e10e19ec624089ee27707c79ecd1e\" returns successfully" Sep 10 00:21:12.055495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2307dc6831e651891ac3a46111e3c86d6c7e10e19ec624089ee27707c79ecd1e-rootfs.mount: Deactivated successfully. 
Sep 10 00:21:12.156682 containerd[1438]: time="2025-09-10T00:21:12.150905370Z" level=info msg="shim disconnected" id=2307dc6831e651891ac3a46111e3c86d6c7e10e19ec624089ee27707c79ecd1e namespace=k8s.io Sep 10 00:21:12.156857 containerd[1438]: time="2025-09-10T00:21:12.156700325Z" level=warning msg="cleaning up after shim disconnected" id=2307dc6831e651891ac3a46111e3c86d6c7e10e19ec624089ee27707c79ecd1e namespace=k8s.io Sep 10 00:21:12.156857 containerd[1438]: time="2025-09-10T00:21:12.156714206Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:21:12.390765 kubelet[2466]: E0910 00:21:12.390714 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2dhv8" podUID="c0d446ad-5074-4566-b66e-22a6ab7ca731" Sep 10 00:21:12.463005 kubelet[2466]: E0910 00:21:12.462904 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:21:12.464581 containerd[1438]: time="2025-09-10T00:21:12.464534677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 00:21:13.464123 kubelet[2466]: E0910 00:21:13.464095 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:21:14.391613 kubelet[2466]: E0910 00:21:14.391548 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2dhv8" podUID="c0d446ad-5074-4566-b66e-22a6ab7ca731" Sep 10 00:21:15.532504 containerd[1438]: time="2025-09-10T00:21:15.532456065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:15.533938 containerd[1438]: time="2025-09-10T00:21:15.533912391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 10 00:21:15.535099 containerd[1438]: time="2025-09-10T00:21:15.534679277Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:15.536581 containerd[1438]: time="2025-09-10T00:21:15.536537628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:15.538115 containerd[1438]: time="2025-09-10T00:21:15.538081480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.07349068s" Sep 10 00:21:15.538115 containerd[1438]: time="2025-09-10T00:21:15.538115242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference 
\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 10 00:21:15.540070 containerd[1438]: time="2025-09-10T00:21:15.540021716Z" level=info msg="CreateContainer within sandbox \"67e5e1a8bee4a41b5aa6a217bc139fe0c447fb8481f0af37b37cb219a54ba1c9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 00:21:15.551416 containerd[1438]: time="2025-09-10T00:21:15.551361712Z" level=info msg="CreateContainer within sandbox \"67e5e1a8bee4a41b5aa6a217bc139fe0c447fb8481f0af37b37cb219a54ba1c9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a22a81b6a2de183dd7f7c2d33795a89ec1e648c3f8ee1c06d163bed2e3047fca\"" Sep 10 00:21:15.552262 containerd[1438]: time="2025-09-10T00:21:15.552229844Z" level=info msg="StartContainer for \"a22a81b6a2de183dd7f7c2d33795a89ec1e648c3f8ee1c06d163bed2e3047fca\"" Sep 10 00:21:15.582235 systemd[1]: Started cri-containerd-a22a81b6a2de183dd7f7c2d33795a89ec1e648c3f8ee1c06d163bed2e3047fca.scope - libcontainer container a22a81b6a2de183dd7f7c2d33795a89ec1e648c3f8ee1c06d163bed2e3047fca. Sep 10 00:21:15.606323 containerd[1438]: time="2025-09-10T00:21:15.606281309Z" level=info msg="StartContainer for \"a22a81b6a2de183dd7f7c2d33795a89ec1e648c3f8ee1c06d163bed2e3047fca\" returns successfully" Sep 10 00:21:16.125069 systemd[1]: cri-containerd-a22a81b6a2de183dd7f7c2d33795a89ec1e648c3f8ee1c06d163bed2e3047fca.scope: Deactivated successfully. Sep 10 00:21:16.142976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a22a81b6a2de183dd7f7c2d33795a89ec1e648c3f8ee1c06d163bed2e3047fca-rootfs.mount: Deactivated successfully. Sep 10 00:21:16.205120 containerd[1438]: time="2025-09-10T00:21:16.205033451Z" level=info msg="shim disconnected" id=a22a81b6a2de183dd7f7c2d33795a89ec1e648c3f8ee1c06d163bed2e3047fca namespace=k8s.io Sep 10 00:21:16.205503 containerd[1438]: time="2025-09-10T00:21:16.205320788Z" level=warning msg="cleaning up after shim disconnected" id=a22a81b6a2de183dd7f7c2d33795a89ec1e648c3f8ee1c06d163bed2e3047fca namespace=k8s.io Sep 10 00:21:16.205503 containerd[1438]: time="2025-09-10T00:21:16.205337229Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:21:16.212957 kubelet[2466]: I0910 00:21:16.212128 2466 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 00:21:16.251720 systemd[1]: Created slice kubepods-besteffort-pod53dbb309_ea6d_426a_a348_2c871575cb3d.slice - libcontainer container kubepods-besteffort-pod53dbb309_ea6d_426a_a348_2c871575cb3d.slice. Sep 10 00:21:16.261436 systemd[1]: Created slice kubepods-burstable-pode347e663_fe05_4ba7_b6ca_d630372bf51c.slice - libcontainer container kubepods-burstable-pode347e663_fe05_4ba7_b6ca_d630372bf51c.slice. Sep 10 00:21:16.266592 systemd[1]: Created slice kubepods-besteffort-pod85708529_bae5_41c2_aae8_a8dcb5a3de9c.slice - libcontainer container kubepods-besteffort-pod85708529_bae5_41c2_aae8_a8dcb5a3de9c.slice. 
Sep 10 00:21:16.272934 kubelet[2466]: I0910 00:21:16.272899 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwwnn\" (UniqueName: \"kubernetes.io/projected/bbdc19e7-71b6-4a83-8e74-efca458fc0cb-kube-api-access-hwwnn\") pod \"coredns-668d6bf9bc-4mrqt\" (UID: \"bbdc19e7-71b6-4a83-8e74-efca458fc0cb\") " pod="kube-system/coredns-668d6bf9bc-4mrqt" Sep 10 00:21:16.273020 kubelet[2466]: I0910 00:21:16.272936 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/375bc444-bbf5-4839-b395-b0f406ed06db-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-x5mwf\" (UID: \"375bc444-bbf5-4839-b395-b0f406ed06db\") " pod="calico-system/goldmane-54d579b49d-x5mwf" Sep 10 00:21:16.273020 kubelet[2466]: I0910 00:21:16.272956 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/375bc444-bbf5-4839-b395-b0f406ed06db-goldmane-key-pair\") pod \"goldmane-54d579b49d-x5mwf\" (UID: \"375bc444-bbf5-4839-b395-b0f406ed06db\") " pod="calico-system/goldmane-54d579b49d-x5mwf" Sep 10 00:21:16.273020 kubelet[2466]: I0910 00:21:16.272974 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4e6abdb0-8540-4e43-a17d-9abbd4d6ede7-calico-apiserver-certs\") pod \"calico-apiserver-9b45cc59c-f9r2t\" (UID: \"4e6abdb0-8540-4e43-a17d-9abbd4d6ede7\") " pod="calico-apiserver/calico-apiserver-9b45cc59c-f9r2t" Sep 10 00:21:16.273020 kubelet[2466]: I0910 00:21:16.272990 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/375bc444-bbf5-4839-b395-b0f406ed06db-config\") pod \"goldmane-54d579b49d-x5mwf\" (UID: \"375bc444-bbf5-4839-b395-b0f406ed06db\") " pod="calico-system/goldmane-54d579b49d-x5mwf" Sep 10 00:21:16.273020 kubelet[2466]: I0910 00:21:16.273007 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/53dbb309-ea6d-426a-a348-2c871575cb3d-whisker-backend-key-pair\") pod \"whisker-566897bdb5-x2xlw\" (UID: \"53dbb309-ea6d-426a-a348-2c871575cb3d\") " pod="calico-system/whisker-566897bdb5-x2xlw" Sep 10 00:21:16.273185 kubelet[2466]: I0910 00:21:16.273022 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4f46fd77-2fed-494f-b2c2-daf32e135470-calico-apiserver-certs\") pod \"calico-apiserver-9b45cc59c-mrqjs\" (UID: \"4f46fd77-2fed-494f-b2c2-daf32e135470\") " pod="calico-apiserver/calico-apiserver-9b45cc59c-mrqjs" Sep 10 00:21:16.273185 kubelet[2466]: I0910 00:21:16.273038 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg9cm\" (UniqueName: \"kubernetes.io/projected/4f46fd77-2fed-494f-b2c2-daf32e135470-kube-api-access-cg9cm\") pod \"calico-apiserver-9b45cc59c-mrqjs\" (UID: \"4f46fd77-2fed-494f-b2c2-daf32e135470\") " pod="calico-apiserver/calico-apiserver-9b45cc59c-mrqjs" Sep 10 00:21:16.273185 kubelet[2466]: I0910 00:21:16.273068 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zkkj\" (UniqueName: 
\"kubernetes.io/projected/375bc444-bbf5-4839-b395-b0f406ed06db-kube-api-access-8zkkj\") pod \"goldmane-54d579b49d-x5mwf\" (UID: \"375bc444-bbf5-4839-b395-b0f406ed06db\") " pod="calico-system/goldmane-54d579b49d-x5mwf" Sep 10 00:21:16.273185 kubelet[2466]: I0910 00:21:16.273088 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbdc19e7-71b6-4a83-8e74-efca458fc0cb-config-volume\") pod \"coredns-668d6bf9bc-4mrqt\" (UID: \"bbdc19e7-71b6-4a83-8e74-efca458fc0cb\") " pod="kube-system/coredns-668d6bf9bc-4mrqt" Sep 10 00:21:16.273185 kubelet[2466]: I0910 00:21:16.273104 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hkxj\" (UniqueName: \"kubernetes.io/projected/e347e663-fe05-4ba7-b6ca-d630372bf51c-kube-api-access-2hkxj\") pod \"coredns-668d6bf9bc-tlwmv\" (UID: \"e347e663-fe05-4ba7-b6ca-d630372bf51c\") " pod="kube-system/coredns-668d6bf9bc-tlwmv" Sep 10 00:21:16.273300 kubelet[2466]: I0910 00:21:16.273128 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfmhp\" (UniqueName: \"kubernetes.io/projected/53dbb309-ea6d-426a-a348-2c871575cb3d-kube-api-access-qfmhp\") pod \"whisker-566897bdb5-x2xlw\" (UID: \"53dbb309-ea6d-426a-a348-2c871575cb3d\") " pod="calico-system/whisker-566897bdb5-x2xlw" Sep 10 00:21:16.273300 kubelet[2466]: I0910 00:21:16.273147 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nfjm\" (UniqueName: \"kubernetes.io/projected/4e6abdb0-8540-4e43-a17d-9abbd4d6ede7-kube-api-access-4nfjm\") pod \"calico-apiserver-9b45cc59c-f9r2t\" (UID: \"4e6abdb0-8540-4e43-a17d-9abbd4d6ede7\") " pod="calico-apiserver/calico-apiserver-9b45cc59c-f9r2t" Sep 10 00:21:16.273300 kubelet[2466]: I0910 00:21:16.273164 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e347e663-fe05-4ba7-b6ca-d630372bf51c-config-volume\") pod \"coredns-668d6bf9bc-tlwmv\" (UID: \"e347e663-fe05-4ba7-b6ca-d630372bf51c\") " pod="kube-system/coredns-668d6bf9bc-tlwmv" Sep 10 00:21:16.273300 kubelet[2466]: I0910 00:21:16.273181 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53dbb309-ea6d-426a-a348-2c871575cb3d-whisker-ca-bundle\") pod \"whisker-566897bdb5-x2xlw\" (UID: \"53dbb309-ea6d-426a-a348-2c871575cb3d\") " pod="calico-system/whisker-566897bdb5-x2xlw" Sep 10 00:21:16.273300 kubelet[2466]: I0910 00:21:16.273197 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85708529-bae5-41c2-aae8-a8dcb5a3de9c-tigera-ca-bundle\") pod \"calico-kube-controllers-844fff9f5c-zklz7\" (UID: \"85708529-bae5-41c2-aae8-a8dcb5a3de9c\") " pod="calico-system/calico-kube-controllers-844fff9f5c-zklz7" Sep 10 00:21:16.273410 kubelet[2466]: I0910 00:21:16.273212 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2hnk\" (UniqueName: \"kubernetes.io/projected/85708529-bae5-41c2-aae8-a8dcb5a3de9c-kube-api-access-q2hnk\") pod \"calico-kube-controllers-844fff9f5c-zklz7\" (UID: \"85708529-bae5-41c2-aae8-a8dcb5a3de9c\") " 
pod="calico-system/calico-kube-controllers-844fff9f5c-zklz7" Sep 10 00:21:16.273943 systemd[1]: Created slice kubepods-burstable-podbbdc19e7_71b6_4a83_8e74_efca458fc0cb.slice - libcontainer container kubepods-burstable-podbbdc19e7_71b6_4a83_8e74_efca458fc0cb.slice. Sep 10 00:21:16.295859 systemd[1]: Created slice kubepods-besteffort-pod4e6abdb0_8540_4e43_a17d_9abbd4d6ede7.slice - libcontainer container kubepods-besteffort-pod4e6abdb0_8540_4e43_a17d_9abbd4d6ede7.slice. Sep 10 00:21:16.304799 systemd[1]: Created slice kubepods-besteffort-pod4f46fd77_2fed_494f_b2c2_daf32e135470.slice - libcontainer container kubepods-besteffort-pod4f46fd77_2fed_494f_b2c2_daf32e135470.slice. Sep 10 00:21:16.310007 systemd[1]: Created slice kubepods-besteffort-pod375bc444_bbf5_4839_b395_b0f406ed06db.slice - libcontainer container kubepods-besteffort-pod375bc444_bbf5_4839_b395_b0f406ed06db.slice. Sep 10 00:21:16.397403 systemd[1]: Created slice kubepods-besteffort-podc0d446ad_5074_4566_b66e_22a6ab7ca731.slice - libcontainer container kubepods-besteffort-podc0d446ad_5074_4566_b66e_22a6ab7ca731.slice. Sep 10 00:21:16.400639 containerd[1438]: time="2025-09-10T00:21:16.400497513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2dhv8,Uid:c0d446ad-5074-4566-b66e-22a6ab7ca731,Namespace:calico-system,Attempt:0,}" Sep 10 00:21:16.472022 containerd[1438]: time="2025-09-10T00:21:16.471973402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 00:21:16.494334 containerd[1438]: time="2025-09-10T00:21:16.494288438Z" level=error msg="Failed to destroy network for sandbox \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.494627 containerd[1438]: time="2025-09-10T00:21:16.494602496Z" level=error msg="encountered an error cleaning up failed sandbox \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.494673 containerd[1438]: time="2025-09-10T00:21:16.494654179Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2dhv8,Uid:c0d446ad-5074-4566-b66e-22a6ab7ca731,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.494891 kubelet[2466]: E0910 00:21:16.494856 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.497172 kubelet[2466]: E0910 00:21:16.497135 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2dhv8" Sep 10 00:21:16.497226 kubelet[2466]: E0910 00:21:16.497177 2466 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2dhv8" Sep 10 00:21:16.497265 kubelet[2466]: E0910 00:21:16.497236 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2dhv8_calico-system(c0d446ad-5074-4566-b66e-22a6ab7ca731)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2dhv8_calico-system(c0d446ad-5074-4566-b66e-22a6ab7ca731)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2dhv8" podUID="c0d446ad-5074-4566-b66e-22a6ab7ca731" Sep 10 00:21:16.563450 containerd[1438]: time="2025-09-10T00:21:16.563398992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-566897bdb5-x2xlw,Uid:53dbb309-ea6d-426a-a348-2c871575cb3d,Namespace:calico-system,Attempt:0,}" Sep 10 00:21:16.566743 kubelet[2466]: E0910 00:21:16.566706 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:21:16.567606 containerd[1438]: time="2025-09-10T00:21:16.567257973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlwmv,Uid:e347e663-fe05-4ba7-b6ca-d630372bf51c,Namespace:kube-system,Attempt:0,}" Sep 10 00:21:16.571084 containerd[1438]: time="2025-09-10T00:21:16.570462356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844fff9f5c-zklz7,Uid:85708529-bae5-41c2-aae8-a8dcb5a3de9c,Namespace:calico-system,Attempt:0,}" Sep 10 00:21:16.579705 kubelet[2466]: E0910 00:21:16.579661 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:21:16.580794 containerd[1438]: time="2025-09-10T00:21:16.580313960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4mrqt,Uid:bbdc19e7-71b6-4a83-8e74-efca458fc0cb,Namespace:kube-system,Attempt:0,}" Sep 10 00:21:16.607520 containerd[1438]: time="2025-09-10T00:21:16.607472433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b45cc59c-f9r2t,Uid:4e6abdb0-8540-4e43-a17d-9abbd4d6ede7,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:21:16.610241 containerd[1438]: time="2025-09-10T00:21:16.610173588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b45cc59c-mrqjs,Uid:4f46fd77-2fed-494f-b2c2-daf32e135470,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:21:16.613064 
containerd[1438]: time="2025-09-10T00:21:16.613014470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-x5mwf,Uid:375bc444-bbf5-4839-b395-b0f406ed06db,Namespace:calico-system,Attempt:0,}" Sep 10 00:21:16.660503 containerd[1438]: time="2025-09-10T00:21:16.660292175Z" level=error msg="Failed to destroy network for sandbox \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.662121 containerd[1438]: time="2025-09-10T00:21:16.662042035Z" level=error msg="encountered an error cleaning up failed sandbox \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.662204 containerd[1438]: time="2025-09-10T00:21:16.662119839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-566897bdb5-x2xlw,Uid:53dbb309-ea6d-426a-a348-2c871575cb3d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.662418 kubelet[2466]: E0910 00:21:16.662328 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.662418 kubelet[2466]: E0910 00:21:16.662394 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-566897bdb5-x2xlw" Sep 10 00:21:16.662418 kubelet[2466]: E0910 00:21:16.662413 2466 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-566897bdb5-x2xlw" Sep 10 00:21:16.662837 kubelet[2466]: E0910 00:21:16.662464 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-566897bdb5-x2xlw_calico-system(53dbb309-ea6d-426a-a348-2c871575cb3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-566897bdb5-x2xlw_calico-system(53dbb309-ea6d-426a-a348-2c871575cb3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-566897bdb5-x2xlw" podUID="53dbb309-ea6d-426a-a348-2c871575cb3d" Sep 10 00:21:16.681094 containerd[1438]: time="2025-09-10T00:21:16.680707103Z" level=error msg="Failed to destroy network for sandbox \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.681603 containerd[1438]: time="2025-09-10T00:21:16.681556151Z" level=error msg="encountered an error cleaning up failed sandbox \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.681663 containerd[1438]: time="2025-09-10T00:21:16.681614635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlwmv,Uid:e347e663-fe05-4ba7-b6ca-d630372bf51c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.681893 kubelet[2466]: E0910 00:21:16.681857 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.682003 kubelet[2466]: E0910 00:21:16.681986 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tlwmv" Sep 10 00:21:16.682109 kubelet[2466]: E0910 00:21:16.682073 2466 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-tlwmv" Sep 10 00:21:16.682225 kubelet[2466]: E0910 00:21:16.682182 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-tlwmv_kube-system(e347e663-fe05-4ba7-b6ca-d630372bf51c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-tlwmv_kube-system(e347e663-fe05-4ba7-b6ca-d630372bf51c)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tlwmv" podUID="e347e663-fe05-4ba7-b6ca-d630372bf51c" Sep 10 00:21:16.694963 containerd[1438]: time="2025-09-10T00:21:16.694919356Z" level=error msg="Failed to destroy network for sandbox \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.696527 containerd[1438]: time="2025-09-10T00:21:16.696349557Z" level=error msg="encountered an error cleaning up failed sandbox \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.696527 containerd[1438]: time="2025-09-10T00:21:16.696413721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844fff9f5c-zklz7,Uid:85708529-bae5-41c2-aae8-a8dcb5a3de9c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.697219 kubelet[2466]: E0910 00:21:16.696827 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.697219 kubelet[2466]: E0910 00:21:16.696889 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-844fff9f5c-zklz7" Sep 10 00:21:16.697219 kubelet[2466]: E0910 00:21:16.696919 2466 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-844fff9f5c-zklz7" Sep 10 00:21:16.697405 kubelet[2466]: E0910 00:21:16.696958 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-844fff9f5c-zklz7_calico-system(85708529-bae5-41c2-aae8-a8dcb5a3de9c)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-kube-controllers-844fff9f5c-zklz7_calico-system(85708529-bae5-41c2-aae8-a8dcb5a3de9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-844fff9f5c-zklz7" podUID="85708529-bae5-41c2-aae8-a8dcb5a3de9c" Sep 10 00:21:16.699221 containerd[1438]: time="2025-09-10T00:21:16.699179039Z" level=error msg="Failed to destroy network for sandbox \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.699656 containerd[1438]: time="2025-09-10T00:21:16.699622105Z" level=error msg="encountered an error cleaning up failed sandbox \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.699728 containerd[1438]: time="2025-09-10T00:21:16.699678228Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4mrqt,Uid:bbdc19e7-71b6-4a83-8e74-efca458fc0cb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.700479 kubelet[2466]: E0910 00:21:16.700183 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.700479 kubelet[2466]: E0910 00:21:16.700278 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4mrqt" Sep 10 00:21:16.700479 kubelet[2466]: E0910 00:21:16.700313 2466 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4mrqt" Sep 10 00:21:16.700612 kubelet[2466]: E0910 00:21:16.700358 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-4mrqt_kube-system(bbdc19e7-71b6-4a83-8e74-efca458fc0cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4mrqt_kube-system(bbdc19e7-71b6-4a83-8e74-efca458fc0cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4mrqt" podUID="bbdc19e7-71b6-4a83-8e74-efca458fc0cb" Sep 10 00:21:16.728263 containerd[1438]: time="2025-09-10T00:21:16.728193099Z" level=error msg="Failed to destroy network for sandbox \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.728590 containerd[1438]: time="2025-09-10T00:21:16.728564680Z" level=error msg="encountered an error cleaning up failed sandbox \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.728654 containerd[1438]: time="2025-09-10T00:21:16.728633924Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-x5mwf,Uid:375bc444-bbf5-4839-b395-b0f406ed06db,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.729083 kubelet[2466]: E0910 00:21:16.728864 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.729083 kubelet[2466]: E0910 00:21:16.728924 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-x5mwf" Sep 10 00:21:16.729083 kubelet[2466]: E0910 00:21:16.728947 2466 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-x5mwf" Sep 10 00:21:16.729212 kubelet[2466]: E0910 00:21:16.728992 2466 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-x5mwf_calico-system(375bc444-bbf5-4839-b395-b0f406ed06db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-x5mwf_calico-system(375bc444-bbf5-4839-b395-b0f406ed06db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-x5mwf" podUID="375bc444-bbf5-4839-b395-b0f406ed06db" Sep 10 00:21:16.730222 containerd[1438]: time="2025-09-10T00:21:16.730153531Z" level=error msg="Failed to destroy network for sandbox \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.730862 containerd[1438]: time="2025-09-10T00:21:16.730677401Z" level=error msg="encountered an error cleaning up failed sandbox \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.731193 containerd[1438]: time="2025-09-10T00:21:16.731020901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b45cc59c-f9r2t,Uid:4e6abdb0-8540-4e43-a17d-9abbd4d6ede7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.731520 kubelet[2466]: E0910 00:21:16.731431 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.731520 kubelet[2466]: E0910 00:21:16.731484 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b45cc59c-f9r2t" Sep 10 00:21:16.731520 kubelet[2466]: E0910 00:21:16.731506 2466 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b45cc59c-f9r2t" Sep 10 
00:21:16.731637 kubelet[2466]: E0910 00:21:16.731543 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9b45cc59c-f9r2t_calico-apiserver(4e6abdb0-8540-4e43-a17d-9abbd4d6ede7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9b45cc59c-f9r2t_calico-apiserver(4e6abdb0-8540-4e43-a17d-9abbd4d6ede7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b45cc59c-f9r2t" podUID="4e6abdb0-8540-4e43-a17d-9abbd4d6ede7" Sep 10 00:21:16.738443 containerd[1438]: time="2025-09-10T00:21:16.738324039Z" level=error msg="Failed to destroy network for sandbox \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.739653 containerd[1438]: time="2025-09-10T00:21:16.739510587Z" level=error msg="encountered an error cleaning up failed sandbox \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.739653 containerd[1438]: time="2025-09-10T00:21:16.739580151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b45cc59c-mrqjs,Uid:4f46fd77-2fed-494f-b2c2-daf32e135470,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.739872 kubelet[2466]: E0910 00:21:16.739831 2466 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:16.739946 kubelet[2466]: E0910 00:21:16.739894 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b45cc59c-mrqjs" Sep 10 00:21:16.739946 kubelet[2466]: E0910 00:21:16.739914 2466 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b45cc59c-mrqjs" Sep 10 00:21:16.739995 kubelet[2466]: E0910 00:21:16.739955 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9b45cc59c-mrqjs_calico-apiserver(4f46fd77-2fed-494f-b2c2-daf32e135470)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9b45cc59c-mrqjs_calico-apiserver(4f46fd77-2fed-494f-b2c2-daf32e135470)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b45cc59c-mrqjs" podUID="4f46fd77-2fed-494f-b2c2-daf32e135470" Sep 10 00:21:17.474912 containerd[1438]: time="2025-09-10T00:21:17.474540831Z" level=info msg="StopPodSandbox for \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\"" Sep 10 00:21:17.474912 containerd[1438]: time="2025-09-10T00:21:17.474698000Z" level=info msg="Ensure that sandbox 2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5 in task-service has been cleanup successfully" Sep 10 00:21:17.479451 kubelet[2466]: I0910 00:21:17.479427 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:17.480060 kubelet[2466]: I0910 00:21:17.479805 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Sep 10 00:21:17.480060 kubelet[2466]: I0910 00:21:17.479855 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:17.481103 containerd[1438]: time="2025-09-10T00:21:17.480627246Z" level=info msg="StopPodSandbox for \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\"" Sep 10 00:21:17.481103 containerd[1438]: time="2025-09-10T00:21:17.480781334Z" level=info msg="StopPodSandbox for \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\"" Sep 10 00:21:17.481103 containerd[1438]: time="2025-09-10T00:21:17.480791175Z" level=info msg="Ensure that sandbox 7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b in task-service has been cleanup successfully" Sep 10 00:21:17.481103 containerd[1438]: time="2025-09-10T00:21:17.480907781Z" level=info msg="Ensure that sandbox 9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d in task-service has been cleanup successfully" Sep 10 00:21:17.481587 kubelet[2466]: I0910 00:21:17.481560 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:17.482273 containerd[1438]: time="2025-09-10T00:21:17.482243214Z" level=info msg="StopPodSandbox for \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\"" Sep 10 00:21:17.485890 containerd[1438]: time="2025-09-10T00:21:17.485845932Z" level=info msg="Ensure that sandbox 9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7 in task-service has been cleanup successfully" Sep 10 00:21:17.490141 kubelet[2466]: I0910 00:21:17.490108 2466 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:17.495355 containerd[1438]: time="2025-09-10T00:21:17.495310212Z" level=info msg="StopPodSandbox for \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\"" Sep 10 00:21:17.499024 containerd[1438]: time="2025-09-10T00:21:17.498978853Z" level=info msg="Ensure that sandbox 6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe in task-service has been cleanup successfully" Sep 10 00:21:17.499103 kubelet[2466]: I0910 00:21:17.499069 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:17.502624 containerd[1438]: time="2025-09-10T00:21:17.502592651Z" level=info msg="StopPodSandbox for \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\"" Sep 10 00:21:17.503680 containerd[1438]: time="2025-09-10T00:21:17.503650750Z" level=info msg="Ensure that sandbox f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793 in task-service has been cleanup successfully" Sep 10 00:21:17.504609 kubelet[2466]: I0910 00:21:17.504129 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Sep 10 00:21:17.506822 containerd[1438]: time="2025-09-10T00:21:17.506789962Z" level=info msg="StopPodSandbox for \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\"" Sep 10 00:21:17.508074 containerd[1438]: time="2025-09-10T00:21:17.507832499Z" level=info msg="Ensure that sandbox 3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961 in task-service has been cleanup successfully" Sep 10 00:21:17.511872 kubelet[2466]: I0910 00:21:17.511474 2466 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:17.515407 containerd[1438]: time="2025-09-10T00:21:17.515370153Z" level=error msg="StopPodSandbox for \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\" failed" error="failed to destroy network for sandbox \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:17.515897 containerd[1438]: time="2025-09-10T00:21:17.515418716Z" level=info msg="StopPodSandbox for \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\"" Sep 10 00:21:17.516121 containerd[1438]: time="2025-09-10T00:21:17.516101313Z" level=info msg="Ensure that sandbox e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1 in task-service has been cleanup successfully" Sep 10 00:21:17.516302 kubelet[2466]: E0910 00:21:17.516271 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:17.519760 kubelet[2466]: E0910 00:21:17.519694 2466 kuberuntime_manager.go:1546] "Failed 
to stop sandbox" podSandboxID={"Type":"containerd","ID":"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5"} Sep 10 00:21:17.519888 kubelet[2466]: E0910 00:21:17.519871 2466 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bbdc19e7-71b6-4a83-8e74-efca458fc0cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:21:17.520010 kubelet[2466]: E0910 00:21:17.519987 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bbdc19e7-71b6-4a83-8e74-efca458fc0cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4mrqt" podUID="bbdc19e7-71b6-4a83-8e74-efca458fc0cb" Sep 10 00:21:17.550263 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d-shm.mount: Deactivated successfully. Sep 10 00:21:17.550647 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe-shm.mount: Deactivated successfully. Sep 10 00:21:17.550867 containerd[1438]: time="2025-09-10T00:21:17.550184264Z" level=error msg="StopPodSandbox for \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\" failed" error="failed to destroy network for sandbox \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:17.552038 kubelet[2466]: E0910 00:21:17.551896 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:17.552038 kubelet[2466]: E0910 00:21:17.551951 2466 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7"} Sep 10 00:21:17.552038 kubelet[2466]: E0910 00:21:17.551982 2466 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c0d446ad-5074-4566-b66e-22a6ab7ca731\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:21:17.552038 kubelet[2466]: E0910 
00:21:17.552002 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c0d446ad-5074-4566-b66e-22a6ab7ca731\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2dhv8" podUID="c0d446ad-5074-4566-b66e-22a6ab7ca731" Sep 10 00:21:17.560267 containerd[1438]: time="2025-09-10T00:21:17.559894557Z" level=error msg="StopPodSandbox for \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\" failed" error="failed to destroy network for sandbox \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:17.560378 kubelet[2466]: E0910 00:21:17.560134 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:17.560378 kubelet[2466]: E0910 00:21:17.560179 2466 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d"} Sep 10 00:21:17.560378 kubelet[2466]: E0910 00:21:17.560212 2466 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e347e663-fe05-4ba7-b6ca-d630372bf51c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:21:17.560378 kubelet[2466]: E0910 00:21:17.560233 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e347e663-fe05-4ba7-b6ca-d630372bf51c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-tlwmv" podUID="e347e663-fe05-4ba7-b6ca-d630372bf51c" Sep 10 00:21:17.562795 containerd[1438]: time="2025-09-10T00:21:17.562629108Z" level=error msg="StopPodSandbox for \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\" failed" error="failed to destroy network for sandbox \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:17.563110 
kubelet[2466]: E0910 00:21:17.562966 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Sep 10 00:21:17.563110 kubelet[2466]: E0910 00:21:17.563017 2466 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b"} Sep 10 00:21:17.563110 kubelet[2466]: E0910 00:21:17.563065 2466 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85708529-bae5-41c2-aae8-a8dcb5a3de9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:21:17.563110 kubelet[2466]: E0910 00:21:17.563085 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85708529-bae5-41c2-aae8-a8dcb5a3de9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-844fff9f5c-zklz7" podUID="85708529-bae5-41c2-aae8-a8dcb5a3de9c" Sep 10 00:21:17.569966 containerd[1438]: time="2025-09-10T00:21:17.569870705Z" level=error msg="StopPodSandbox for \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\" failed" error="failed to destroy network for sandbox \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:17.570332 kubelet[2466]: E0910 00:21:17.570067 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Sep 10 00:21:17.570332 kubelet[2466]: E0910 00:21:17.570106 2466 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961"} Sep 10 00:21:17.570332 kubelet[2466]: E0910 00:21:17.570133 2466 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4e6abdb0-8540-4e43-a17d-9abbd4d6ede7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:21:17.570332 kubelet[2466]: E0910 00:21:17.570151 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4e6abdb0-8540-4e43-a17d-9abbd4d6ede7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b45cc59c-f9r2t" podUID="4e6abdb0-8540-4e43-a17d-9abbd4d6ede7" Sep 10 00:21:17.571765 containerd[1438]: time="2025-09-10T00:21:17.571720047Z" level=error msg="StopPodSandbox for \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\" failed" error="failed to destroy network for sandbox \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:17.572020 kubelet[2466]: E0910 00:21:17.571875 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:17.572020 kubelet[2466]: E0910 00:21:17.571905 2466 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe"} Sep 10 00:21:17.572020 kubelet[2466]: E0910 00:21:17.571937 2466 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53dbb309-ea6d-426a-a348-2c871575cb3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:21:17.572020 kubelet[2466]: E0910 00:21:17.571954 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"53dbb309-ea6d-426a-a348-2c871575cb3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-566897bdb5-x2xlw" podUID="53dbb309-ea6d-426a-a348-2c871575cb3d" Sep 10 00:21:17.573433 containerd[1438]: time="2025-09-10T00:21:17.573398819Z" level=error msg="StopPodSandbox for \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\" 
failed" error="failed to destroy network for sandbox \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:17.573565 kubelet[2466]: E0910 00:21:17.573537 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:17.573612 kubelet[2466]: E0910 00:21:17.573574 2466 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1"} Sep 10 00:21:17.573612 kubelet[2466]: E0910 00:21:17.573598 2466 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"375bc444-bbf5-4839-b395-b0f406ed06db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:21:17.573674 kubelet[2466]: E0910 00:21:17.573614 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"375bc444-bbf5-4839-b395-b0f406ed06db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-x5mwf" podUID="375bc444-bbf5-4839-b395-b0f406ed06db" Sep 10 00:21:17.578273 containerd[1438]: time="2025-09-10T00:21:17.578229764Z" level=error msg="StopPodSandbox for \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\" failed" error="failed to destroy network for sandbox \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:21:17.578580 kubelet[2466]: E0910 00:21:17.578549 2466 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:17.578619 kubelet[2466]: E0910 00:21:17.578589 2466 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793"} Sep 10 00:21:17.578650 kubelet[2466]: E0910 
00:21:17.578614 2466 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f46fd77-2fed-494f-b2c2-daf32e135470\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:21:17.578650 kubelet[2466]: E0910 00:21:17.578634 2466 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f46fd77-2fed-494f-b2c2-daf32e135470\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b45cc59c-mrqjs" podUID="4f46fd77-2fed-494f-b2c2-daf32e135470" Sep 10 00:21:20.258962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2210539623.mount: Deactivated successfully. Sep 10 00:21:20.550346 containerd[1438]: time="2025-09-10T00:21:20.550187602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 10 00:21:20.564510 containerd[1438]: time="2025-09-10T00:21:20.563830388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:20.568278 containerd[1438]: time="2025-09-10T00:21:20.568235483Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:20.570824 containerd[1438]: time="2025-09-10T00:21:20.570719204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:20.571749 containerd[1438]: time="2025-09-10T00:21:20.571443079Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.099428995s" Sep 10 00:21:20.571749 containerd[1438]: time="2025-09-10T00:21:20.571475601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 10 00:21:20.582350 containerd[1438]: time="2025-09-10T00:21:20.579702242Z" level=info msg="CreateContainer within sandbox \"67e5e1a8bee4a41b5aa6a217bc139fe0c447fb8481f0af37b37cb219a54ba1c9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 00:21:20.615334 containerd[1438]: time="2025-09-10T00:21:20.615289140Z" level=info msg="CreateContainer within sandbox \"67e5e1a8bee4a41b5aa6a217bc139fe0c447fb8481f0af37b37cb219a54ba1c9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"214672ed3fe18eeab06dd6511e88ddb691195a2f48ab9c765764920d569a1c5c\"" Sep 10 00:21:20.615921 
containerd[1438]: time="2025-09-10T00:21:20.615894569Z" level=info msg="StartContainer for \"214672ed3fe18eeab06dd6511e88ddb691195a2f48ab9c765764920d569a1c5c\"" Sep 10 00:21:20.665195 systemd[1]: Started cri-containerd-214672ed3fe18eeab06dd6511e88ddb691195a2f48ab9c765764920d569a1c5c.scope - libcontainer container 214672ed3fe18eeab06dd6511e88ddb691195a2f48ab9c765764920d569a1c5c. Sep 10 00:21:20.698404 containerd[1438]: time="2025-09-10T00:21:20.698351875Z" level=info msg="StartContainer for \"214672ed3fe18eeab06dd6511e88ddb691195a2f48ab9c765764920d569a1c5c\" returns successfully" Sep 10 00:21:20.814353 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 00:21:20.814504 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 10 00:21:20.911972 containerd[1438]: time="2025-09-10T00:21:20.911925701Z" level=info msg="StopPodSandbox for \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\"" Sep 10 00:21:21.113881 containerd[1438]: 2025-09-10 00:21:21.015 [INFO][3803] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:21.113881 containerd[1438]: 2025-09-10 00:21:21.016 [INFO][3803] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" iface="eth0" netns="/var/run/netns/cni-40ed057a-285b-ea95-b415-f009379fc802" Sep 10 00:21:21.113881 containerd[1438]: 2025-09-10 00:21:21.016 [INFO][3803] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" iface="eth0" netns="/var/run/netns/cni-40ed057a-285b-ea95-b415-f009379fc802" Sep 10 00:21:21.113881 containerd[1438]: 2025-09-10 00:21:21.017 [INFO][3803] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" iface="eth0" netns="/var/run/netns/cni-40ed057a-285b-ea95-b415-f009379fc802" Sep 10 00:21:21.113881 containerd[1438]: 2025-09-10 00:21:21.017 [INFO][3803] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:21.113881 containerd[1438]: 2025-09-10 00:21:21.017 [INFO][3803] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:21.113881 containerd[1438]: 2025-09-10 00:21:21.093 [INFO][3820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" HandleID="k8s-pod-network.6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Workload="localhost-k8s-whisker--566897bdb5--x2xlw-eth0" Sep 10 00:21:21.113881 containerd[1438]: 2025-09-10 00:21:21.093 [INFO][3820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:21.113881 containerd[1438]: 2025-09-10 00:21:21.093 [INFO][3820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:21.113881 containerd[1438]: 2025-09-10 00:21:21.104 [WARNING][3820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" HandleID="k8s-pod-network.6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Workload="localhost-k8s-whisker--566897bdb5--x2xlw-eth0" Sep 10 00:21:21.113881 containerd[1438]: 2025-09-10 00:21:21.104 [INFO][3820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" HandleID="k8s-pod-network.6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Workload="localhost-k8s-whisker--566897bdb5--x2xlw-eth0" Sep 10 00:21:21.113881 containerd[1438]: 2025-09-10 00:21:21.108 [INFO][3820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:21.113881 containerd[1438]: 2025-09-10 00:21:21.111 [INFO][3803] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:21.114418 containerd[1438]: time="2025-09-10T00:21:21.114016685Z" level=info msg="TearDown network for sandbox \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\" successfully" Sep 10 00:21:21.114418 containerd[1438]: time="2025-09-10T00:21:21.114043486Z" level=info msg="StopPodSandbox for \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\" returns successfully" Sep 10 00:21:21.208317 kubelet[2466]: I0910 00:21:21.208195 2466 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53dbb309-ea6d-426a-a348-2c871575cb3d-whisker-ca-bundle\") pod \"53dbb309-ea6d-426a-a348-2c871575cb3d\" (UID: \"53dbb309-ea6d-426a-a348-2c871575cb3d\") " Sep 10 00:21:21.208317 kubelet[2466]: I0910 00:21:21.208252 2466 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/53dbb309-ea6d-426a-a348-2c871575cb3d-whisker-backend-key-pair\") pod \"53dbb309-ea6d-426a-a348-2c871575cb3d\" (UID: \"53dbb309-ea6d-426a-a348-2c871575cb3d\") " Sep 10 00:21:21.208317 kubelet[2466]: I0910 00:21:21.208276 2466 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qfmhp\" (UniqueName: \"kubernetes.io/projected/53dbb309-ea6d-426a-a348-2c871575cb3d-kube-api-access-qfmhp\") pod \"53dbb309-ea6d-426a-a348-2c871575cb3d\" (UID: \"53dbb309-ea6d-426a-a348-2c871575cb3d\") " Sep 10 00:21:21.219017 kubelet[2466]: I0910 00:21:21.218984 2466 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53dbb309-ea6d-426a-a348-2c871575cb3d-kube-api-access-qfmhp" (OuterVolumeSpecName: "kube-api-access-qfmhp") pod "53dbb309-ea6d-426a-a348-2c871575cb3d" (UID: "53dbb309-ea6d-426a-a348-2c871575cb3d"). InnerVolumeSpecName "kube-api-access-qfmhp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 00:21:21.219220 kubelet[2466]: I0910 00:21:21.218987 2466 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53dbb309-ea6d-426a-a348-2c871575cb3d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "53dbb309-ea6d-426a-a348-2c871575cb3d" (UID: "53dbb309-ea6d-426a-a348-2c871575cb3d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 00:21:21.219615 kubelet[2466]: I0910 00:21:21.219589 2466 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53dbb309-ea6d-426a-a348-2c871575cb3d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "53dbb309-ea6d-426a-a348-2c871575cb3d" (UID: "53dbb309-ea6d-426a-a348-2c871575cb3d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 00:21:21.259145 systemd[1]: run-netns-cni\x2d40ed057a\x2d285b\x2dea95\x2db415\x2df009379fc802.mount: Deactivated successfully. Sep 10 00:21:21.259239 systemd[1]: var-lib-kubelet-pods-53dbb309\x2dea6d\x2d426a\x2da348\x2d2c871575cb3d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqfmhp.mount: Deactivated successfully. Sep 10 00:21:21.259292 systemd[1]: var-lib-kubelet-pods-53dbb309\x2dea6d\x2d426a\x2da348\x2d2c871575cb3d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 00:21:21.309425 kubelet[2466]: I0910 00:21:21.309383 2466 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/53dbb309-ea6d-426a-a348-2c871575cb3d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 00:21:21.309425 kubelet[2466]: I0910 00:21:21.309419 2466 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qfmhp\" (UniqueName: \"kubernetes.io/projected/53dbb309-ea6d-426a-a348-2c871575cb3d-kube-api-access-qfmhp\") on node \"localhost\" DevicePath \"\"" Sep 10 00:21:21.309425 kubelet[2466]: I0910 00:21:21.309429 2466 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53dbb309-ea6d-426a-a348-2c871575cb3d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 00:21:21.403492 systemd[1]: Removed slice kubepods-besteffort-pod53dbb309_ea6d_426a_a348_2c871575cb3d.slice - libcontainer container kubepods-besteffort-pod53dbb309_ea6d_426a_a348_2c871575cb3d.slice. Sep 10 00:21:21.570732 kubelet[2466]: I0910 00:21:21.570333 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-h7xfz" podStartSLOduration=1.976129452 podStartE2EDuration="13.570316028s" podCreationTimestamp="2025-09-10 00:21:08 +0000 UTC" firstStartedPulling="2025-09-10 00:21:08.978254952 +0000 UTC m=+19.676374998" lastFinishedPulling="2025-09-10 00:21:20.572441528 +0000 UTC m=+31.270561574" observedRunningTime="2025-09-10 00:21:21.568422819 +0000 UTC m=+32.266542865" watchObservedRunningTime="2025-09-10 00:21:21.570316028 +0000 UTC m=+32.268436074" Sep 10 00:21:21.629172 systemd[1]: Created slice kubepods-besteffort-pode616f7e3_a964_477f_a230_73408b6c3bce.slice - libcontainer container kubepods-besteffort-pode616f7e3_a964_477f_a230_73408b6c3bce.slice. 
Sep 10 00:21:21.712672 kubelet[2466]: I0910 00:21:21.712558 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e616f7e3-a964-477f-a230-73408b6c3bce-whisker-backend-key-pair\") pod \"whisker-797fc48d96-htrjs\" (UID: \"e616f7e3-a964-477f-a230-73408b6c3bce\") " pod="calico-system/whisker-797fc48d96-htrjs" Sep 10 00:21:21.712672 kubelet[2466]: I0910 00:21:21.712611 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vb62\" (UniqueName: \"kubernetes.io/projected/e616f7e3-a964-477f-a230-73408b6c3bce-kube-api-access-9vb62\") pod \"whisker-797fc48d96-htrjs\" (UID: \"e616f7e3-a964-477f-a230-73408b6c3bce\") " pod="calico-system/whisker-797fc48d96-htrjs" Sep 10 00:21:21.712672 kubelet[2466]: I0910 00:21:21.712649 2466 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e616f7e3-a964-477f-a230-73408b6c3bce-whisker-ca-bundle\") pod \"whisker-797fc48d96-htrjs\" (UID: \"e616f7e3-a964-477f-a230-73408b6c3bce\") " pod="calico-system/whisker-797fc48d96-htrjs" Sep 10 00:21:21.933879 containerd[1438]: time="2025-09-10T00:21:21.933790805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-797fc48d96-htrjs,Uid:e616f7e3-a964-477f-a230-73408b6c3bce,Namespace:calico-system,Attempt:0,}" Sep 10 00:21:22.055451 systemd-networkd[1381]: cali519fecbb005: Link UP Sep 10 00:21:22.055929 systemd-networkd[1381]: cali519fecbb005: Gained carrier Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:21.966 [INFO][3844] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:21.983 [INFO][3844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--797fc48d96--htrjs-eth0 whisker-797fc48d96- calico-system e616f7e3-a964-477f-a230-73408b6c3bce 886 0 2025-09-10 00:21:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:797fc48d96 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-797fc48d96-htrjs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali519fecbb005 [] [] }} ContainerID="b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" Namespace="calico-system" Pod="whisker-797fc48d96-htrjs" WorkloadEndpoint="localhost-k8s-whisker--797fc48d96--htrjs-" Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:21.983 [INFO][3844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" Namespace="calico-system" Pod="whisker-797fc48d96-htrjs" WorkloadEndpoint="localhost-k8s-whisker--797fc48d96--htrjs-eth0" Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.008 [INFO][3858] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" HandleID="k8s-pod-network.b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" Workload="localhost-k8s-whisker--797fc48d96--htrjs-eth0" Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.008 [INFO][3858] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" HandleID="k8s-pod-network.b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" Workload="localhost-k8s-whisker--797fc48d96--htrjs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-797fc48d96-htrjs", "timestamp":"2025-09-10 00:21:22.008674594 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.008 [INFO][3858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.008 [INFO][3858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.008 [INFO][3858] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.019 [INFO][3858] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" host="localhost" Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.028 [INFO][3858] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.033 [INFO][3858] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.035 [INFO][3858] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.037 [INFO][3858] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.037 [INFO][3858] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" host="localhost" Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.038 [INFO][3858] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.042 [INFO][3858] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" host="localhost" Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.047 [INFO][3858] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" host="localhost" Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.047 [INFO][3858] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" host="localhost" Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.047 [INFO][3858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:21:22.069357 containerd[1438]: 2025-09-10 00:21:22.047 [INFO][3858] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" HandleID="k8s-pod-network.b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" Workload="localhost-k8s-whisker--797fc48d96--htrjs-eth0" Sep 10 00:21:22.070021 containerd[1438]: 2025-09-10 00:21:22.049 [INFO][3844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" Namespace="calico-system" Pod="whisker-797fc48d96-htrjs" WorkloadEndpoint="localhost-k8s-whisker--797fc48d96--htrjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--797fc48d96--htrjs-eth0", GenerateName:"whisker-797fc48d96-", Namespace:"calico-system", SelfLink:"", UID:"e616f7e3-a964-477f-a230-73408b6c3bce", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"797fc48d96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-797fc48d96-htrjs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali519fecbb005", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:22.070021 containerd[1438]: 2025-09-10 00:21:22.050 [INFO][3844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" Namespace="calico-system" Pod="whisker-797fc48d96-htrjs" WorkloadEndpoint="localhost-k8s-whisker--797fc48d96--htrjs-eth0" Sep 10 00:21:22.070021 containerd[1438]: 2025-09-10 00:21:22.050 [INFO][3844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali519fecbb005 ContainerID="b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" Namespace="calico-system" Pod="whisker-797fc48d96-htrjs" WorkloadEndpoint="localhost-k8s-whisker--797fc48d96--htrjs-eth0" Sep 10 00:21:22.070021 containerd[1438]: 2025-09-10 00:21:22.056 [INFO][3844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" Namespace="calico-system" Pod="whisker-797fc48d96-htrjs" WorkloadEndpoint="localhost-k8s-whisker--797fc48d96--htrjs-eth0" Sep 10 00:21:22.070021 containerd[1438]: 2025-09-10 00:21:22.056 [INFO][3844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" Namespace="calico-system" Pod="whisker-797fc48d96-htrjs" WorkloadEndpoint="localhost-k8s-whisker--797fc48d96--htrjs-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--797fc48d96--htrjs-eth0", GenerateName:"whisker-797fc48d96-", Namespace:"calico-system", SelfLink:"", UID:"e616f7e3-a964-477f-a230-73408b6c3bce", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"797fc48d96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b", Pod:"whisker-797fc48d96-htrjs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali519fecbb005", MAC:"fa:60:97:89:c6:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:22.070021 containerd[1438]: 2025-09-10 00:21:22.067 [INFO][3844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b" Namespace="calico-system" Pod="whisker-797fc48d96-htrjs" WorkloadEndpoint="localhost-k8s-whisker--797fc48d96--htrjs-eth0" Sep 10 00:21:22.084559 containerd[1438]: time="2025-09-10T00:21:22.084169419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:21:22.084719 containerd[1438]: time="2025-09-10T00:21:22.084540636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:21:22.084719 containerd[1438]: time="2025-09-10T00:21:22.084553797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:22.084719 containerd[1438]: time="2025-09-10T00:21:22.084642441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:22.102235 systemd[1]: Started cri-containerd-b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b.scope - libcontainer container b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b. 
Sep 10 00:21:22.120515 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:21:22.137877 containerd[1438]: time="2025-09-10T00:21:22.137259028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-797fc48d96-htrjs,Uid:e616f7e3-a964-477f-a230-73408b6c3bce,Namespace:calico-system,Attempt:0,} returns sandbox id \"b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b\"" Sep 10 00:21:22.140327 containerd[1438]: time="2025-09-10T00:21:22.139298321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 00:21:22.390075 kernel: bpftool[4039]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 10 00:21:22.547644 kubelet[2466]: I0910 00:21:22.547589 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:21:22.560172 systemd-networkd[1381]: vxlan.calico: Link UP Sep 10 00:21:22.560179 systemd-networkd[1381]: vxlan.calico: Gained carrier Sep 10 00:21:23.198043 containerd[1438]: time="2025-09-10T00:21:23.198000204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:23.199294 containerd[1438]: time="2025-09-10T00:21:23.199263580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 10 00:21:23.201381 containerd[1438]: time="2025-09-10T00:21:23.201334950Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:23.204063 containerd[1438]: time="2025-09-10T00:21:23.204002347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:23.204750 containerd[1438]: time="2025-09-10T00:21:23.204717419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.065380736s" Sep 10 00:21:23.204805 containerd[1438]: time="2025-09-10T00:21:23.204751500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 10 00:21:23.207246 containerd[1438]: time="2025-09-10T00:21:23.207202807Z" level=info msg="CreateContainer within sandbox \"b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 00:21:23.228300 containerd[1438]: time="2025-09-10T00:21:23.228254890Z" level=info msg="CreateContainer within sandbox \"b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"87b7c17c5ee793c6d386b6483cfb67076fc6ac57db7447ebdfec4ff60587ed02\"" Sep 10 00:21:23.228847 containerd[1438]: time="2025-09-10T00:21:23.228820514Z" level=info msg="StartContainer for \"87b7c17c5ee793c6d386b6483cfb67076fc6ac57db7447ebdfec4ff60587ed02\"" Sep 10 00:21:23.270238 systemd[1]: Started 
cri-containerd-87b7c17c5ee793c6d386b6483cfb67076fc6ac57db7447ebdfec4ff60587ed02.scope - libcontainer container 87b7c17c5ee793c6d386b6483cfb67076fc6ac57db7447ebdfec4ff60587ed02. Sep 10 00:21:23.310061 containerd[1438]: time="2025-09-10T00:21:23.310002271Z" level=info msg="StartContainer for \"87b7c17c5ee793c6d386b6483cfb67076fc6ac57db7447ebdfec4ff60587ed02\" returns successfully" Sep 10 00:21:23.311875 containerd[1438]: time="2025-09-10T00:21:23.311687344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 00:21:23.393574 kubelet[2466]: I0910 00:21:23.393527 2466 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53dbb309-ea6d-426a-a348-2c871575cb3d" path="/var/lib/kubelet/pods/53dbb309-ea6d-426a-a348-2c871575cb3d/volumes" Sep 10 00:21:23.496221 systemd-networkd[1381]: cali519fecbb005: Gained IPv6LL Sep 10 00:21:24.520215 systemd-networkd[1381]: vxlan.calico: Gained IPv6LL Sep 10 00:21:25.004836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2245539526.mount: Deactivated successfully. Sep 10 00:21:25.050071 containerd[1438]: time="2025-09-10T00:21:25.050011366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:25.050712 containerd[1438]: time="2025-09-10T00:21:25.050681193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 10 00:21:25.051475 containerd[1438]: time="2025-09-10T00:21:25.051451465Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:25.053529 containerd[1438]: time="2025-09-10T00:21:25.053498908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:25.054385 containerd[1438]: time="2025-09-10T00:21:25.054357184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.742637758s" Sep 10 00:21:25.054426 containerd[1438]: time="2025-09-10T00:21:25.054391105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 10 00:21:25.056589 containerd[1438]: time="2025-09-10T00:21:25.056559874Z" level=info msg="CreateContainer within sandbox \"b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 00:21:25.066169 containerd[1438]: time="2025-09-10T00:21:25.066128586Z" level=info msg="CreateContainer within sandbox \"b09fcea5f3b8e245a6f56d32e8e1de7678035d0c74a1e9c28624a3229ab4579b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7a5501da6190b8aa3090fb524ca04306d45efbeefc46bffd4fdc74ea7b888d3c\"" Sep 10 00:21:25.067889 containerd[1438]: time="2025-09-10T00:21:25.066811454Z" level=info msg="StartContainer for 
\"7a5501da6190b8aa3090fb524ca04306d45efbeefc46bffd4fdc74ea7b888d3c\"" Sep 10 00:21:25.104199 systemd[1]: Started cri-containerd-7a5501da6190b8aa3090fb524ca04306d45efbeefc46bffd4fdc74ea7b888d3c.scope - libcontainer container 7a5501da6190b8aa3090fb524ca04306d45efbeefc46bffd4fdc74ea7b888d3c. Sep 10 00:21:25.164464 containerd[1438]: time="2025-09-10T00:21:25.164418572Z" level=info msg="StartContainer for \"7a5501da6190b8aa3090fb524ca04306d45efbeefc46bffd4fdc74ea7b888d3c\" returns successfully" Sep 10 00:21:29.393116 containerd[1438]: time="2025-09-10T00:21:29.392573474Z" level=info msg="StopPodSandbox for \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\"" Sep 10 00:21:29.393116 containerd[1438]: time="2025-09-10T00:21:29.392623756Z" level=info msg="StopPodSandbox for \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\"" Sep 10 00:21:29.393116 containerd[1438]: time="2025-09-10T00:21:29.392966449Z" level=info msg="StopPodSandbox for \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\"" Sep 10 00:21:29.394999 containerd[1438]: time="2025-09-10T00:21:29.393646433Z" level=info msg="StopPodSandbox for \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\"" Sep 10 00:21:29.448746 kubelet[2466]: I0910 00:21:29.448654 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-797fc48d96-htrjs" podStartSLOduration=5.531994178 podStartE2EDuration="8.448609587s" podCreationTimestamp="2025-09-10 00:21:21 +0000 UTC" firstStartedPulling="2025-09-10 00:21:22.138800378 +0000 UTC m=+32.836920424" lastFinishedPulling="2025-09-10 00:21:25.055415787 +0000 UTC m=+35.753535833" observedRunningTime="2025-09-10 00:21:25.570747456 +0000 UTC m=+36.268867502" watchObservedRunningTime="2025-09-10 00:21:29.448609587 +0000 UTC m=+40.146729633" Sep 10 00:21:29.508712 containerd[1438]: 2025-09-10 00:21:29.454 [INFO][4260] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:29.508712 containerd[1438]: 2025-09-10 00:21:29.454 [INFO][4260] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" iface="eth0" netns="/var/run/netns/cni-0279b80f-e98a-b1f8-177f-973cd984b9e0" Sep 10 00:21:29.508712 containerd[1438]: 2025-09-10 00:21:29.454 [INFO][4260] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" iface="eth0" netns="/var/run/netns/cni-0279b80f-e98a-b1f8-177f-973cd984b9e0" Sep 10 00:21:29.508712 containerd[1438]: 2025-09-10 00:21:29.454 [INFO][4260] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" iface="eth0" netns="/var/run/netns/cni-0279b80f-e98a-b1f8-177f-973cd984b9e0" Sep 10 00:21:29.508712 containerd[1438]: 2025-09-10 00:21:29.454 [INFO][4260] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:29.508712 containerd[1438]: 2025-09-10 00:21:29.454 [INFO][4260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:29.508712 containerd[1438]: 2025-09-10 00:21:29.493 [INFO][4294] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" HandleID="k8s-pod-network.2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Workload="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:29.508712 containerd[1438]: 2025-09-10 00:21:29.493 [INFO][4294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:29.508712 containerd[1438]: 2025-09-10 00:21:29.494 [INFO][4294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:29.508712 containerd[1438]: 2025-09-10 00:21:29.503 [WARNING][4294] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" HandleID="k8s-pod-network.2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Workload="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:29.508712 containerd[1438]: 2025-09-10 00:21:29.503 [INFO][4294] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" HandleID="k8s-pod-network.2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Workload="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:29.508712 containerd[1438]: 2025-09-10 00:21:29.505 [INFO][4294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:29.508712 containerd[1438]: 2025-09-10 00:21:29.506 [INFO][4260] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:29.509356 containerd[1438]: time="2025-09-10T00:21:29.509324309Z" level=info msg="TearDown network for sandbox \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\" successfully" Sep 10 00:21:29.509356 containerd[1438]: time="2025-09-10T00:21:29.509355910Z" level=info msg="StopPodSandbox for \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\" returns successfully" Sep 10 00:21:29.509737 kubelet[2466]: E0910 00:21:29.509716 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:21:29.511247 containerd[1438]: time="2025-09-10T00:21:29.511214057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4mrqt,Uid:bbdc19e7-71b6-4a83-8e74-efca458fc0cb,Namespace:kube-system,Attempt:1,}" Sep 10 00:21:29.512965 systemd[1]: run-netns-cni\x2d0279b80f\x2de98a\x2db1f8\x2d177f\x2d973cd984b9e0.mount: Deactivated successfully. 
Sep 10 00:21:29.524248 containerd[1438]: 2025-09-10 00:21:29.469 [INFO][4265] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:29.524248 containerd[1438]: 2025-09-10 00:21:29.469 [INFO][4265] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" iface="eth0" netns="/var/run/netns/cni-d6b7526c-20d8-1c69-3244-6e56d391cd43" Sep 10 00:21:29.524248 containerd[1438]: 2025-09-10 00:21:29.469 [INFO][4265] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" iface="eth0" netns="/var/run/netns/cni-d6b7526c-20d8-1c69-3244-6e56d391cd43" Sep 10 00:21:29.524248 containerd[1438]: 2025-09-10 00:21:29.470 [INFO][4265] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" iface="eth0" netns="/var/run/netns/cni-d6b7526c-20d8-1c69-3244-6e56d391cd43" Sep 10 00:21:29.524248 containerd[1438]: 2025-09-10 00:21:29.470 [INFO][4265] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:29.524248 containerd[1438]: 2025-09-10 00:21:29.470 [INFO][4265] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:29.524248 containerd[1438]: 2025-09-10 00:21:29.498 [INFO][4307] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" HandleID="k8s-pod-network.e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Workload="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:29.524248 containerd[1438]: 2025-09-10 00:21:29.498 [INFO][4307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:29.524248 containerd[1438]: 2025-09-10 00:21:29.505 [INFO][4307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:29.524248 containerd[1438]: 2025-09-10 00:21:29.513 [WARNING][4307] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" HandleID="k8s-pod-network.e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Workload="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:29.524248 containerd[1438]: 2025-09-10 00:21:29.513 [INFO][4307] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" HandleID="k8s-pod-network.e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Workload="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:29.524248 containerd[1438]: 2025-09-10 00:21:29.516 [INFO][4307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:29.524248 containerd[1438]: 2025-09-10 00:21:29.518 [INFO][4265] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:29.525794 containerd[1438]: time="2025-09-10T00:21:29.525751905Z" level=info msg="TearDown network for sandbox \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\" successfully" Sep 10 00:21:29.525887 containerd[1438]: time="2025-09-10T00:21:29.525869669Z" level=info msg="StopPodSandbox for \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\" returns successfully" Sep 10 00:21:29.527254 systemd[1]: run-netns-cni\x2dd6b7526c\x2d20d8\x2d1c69\x2d3244\x2d6e56d391cd43.mount: Deactivated successfully. Sep 10 00:21:29.527709 containerd[1438]: time="2025-09-10T00:21:29.527676214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-x5mwf,Uid:375bc444-bbf5-4839-b395-b0f406ed06db,Namespace:calico-system,Attempt:1,}" Sep 10 00:21:29.542162 containerd[1438]: 2025-09-10 00:21:29.449 [INFO][4259] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:29.542162 containerd[1438]: 2025-09-10 00:21:29.449 [INFO][4259] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" iface="eth0" netns="/var/run/netns/cni-08864d7e-4847-874a-7908-0a37ecf464ba" Sep 10 00:21:29.542162 containerd[1438]: 2025-09-10 00:21:29.451 [INFO][4259] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" iface="eth0" netns="/var/run/netns/cni-08864d7e-4847-874a-7908-0a37ecf464ba" Sep 10 00:21:29.542162 containerd[1438]: 2025-09-10 00:21:29.458 [INFO][4259] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" iface="eth0" netns="/var/run/netns/cni-08864d7e-4847-874a-7908-0a37ecf464ba" Sep 10 00:21:29.542162 containerd[1438]: 2025-09-10 00:21:29.458 [INFO][4259] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:29.542162 containerd[1438]: 2025-09-10 00:21:29.458 [INFO][4259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:29.542162 containerd[1438]: 2025-09-10 00:21:29.498 [INFO][4296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" HandleID="k8s-pod-network.f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Workload="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:29.542162 containerd[1438]: 2025-09-10 00:21:29.498 [INFO][4296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:29.542162 containerd[1438]: 2025-09-10 00:21:29.516 [INFO][4296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:29.542162 containerd[1438]: 2025-09-10 00:21:29.531 [WARNING][4296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" HandleID="k8s-pod-network.f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Workload="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:29.542162 containerd[1438]: 2025-09-10 00:21:29.531 [INFO][4296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" HandleID="k8s-pod-network.f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Workload="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:29.542162 containerd[1438]: 2025-09-10 00:21:29.533 [INFO][4296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:29.542162 containerd[1438]: 2025-09-10 00:21:29.538 [INFO][4259] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:29.543147 containerd[1438]: time="2025-09-10T00:21:29.542299785Z" level=info msg="TearDown network for sandbox \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\" successfully" Sep 10 00:21:29.543147 containerd[1438]: time="2025-09-10T00:21:29.542320866Z" level=info msg="StopPodSandbox for \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\" returns successfully" Sep 10 00:21:29.545623 containerd[1438]: time="2025-09-10T00:21:29.545406137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b45cc59c-mrqjs,Uid:4f46fd77-2fed-494f-b2c2-daf32e135470,Namespace:calico-apiserver,Attempt:1,}" Sep 10 00:21:29.546771 systemd[1]: run-netns-cni\x2d08864d7e\x2d4847\x2d874a\x2d7908\x2d0a37ecf464ba.mount: Deactivated successfully. Sep 10 00:21:29.557871 containerd[1438]: 2025-09-10 00:21:29.476 [INFO][4273] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:29.557871 containerd[1438]: 2025-09-10 00:21:29.477 [INFO][4273] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" iface="eth0" netns="/var/run/netns/cni-4cf2b3f9-127a-df29-cfca-6e4640c69eb9" Sep 10 00:21:29.557871 containerd[1438]: 2025-09-10 00:21:29.477 [INFO][4273] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" iface="eth0" netns="/var/run/netns/cni-4cf2b3f9-127a-df29-cfca-6e4640c69eb9" Sep 10 00:21:29.557871 containerd[1438]: 2025-09-10 00:21:29.478 [INFO][4273] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" iface="eth0" netns="/var/run/netns/cni-4cf2b3f9-127a-df29-cfca-6e4640c69eb9" Sep 10 00:21:29.557871 containerd[1438]: 2025-09-10 00:21:29.478 [INFO][4273] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:29.557871 containerd[1438]: 2025-09-10 00:21:29.478 [INFO][4273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:29.557871 containerd[1438]: 2025-09-10 00:21:29.530 [INFO][4313] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" HandleID="k8s-pod-network.9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Workload="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:29.557871 containerd[1438]: 2025-09-10 00:21:29.530 [INFO][4313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:29.557871 containerd[1438]: 2025-09-10 00:21:29.533 [INFO][4313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:29.557871 containerd[1438]: 2025-09-10 00:21:29.544 [WARNING][4313] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" HandleID="k8s-pod-network.9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Workload="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:29.557871 containerd[1438]: 2025-09-10 00:21:29.544 [INFO][4313] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" HandleID="k8s-pod-network.9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Workload="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:29.557871 containerd[1438]: 2025-09-10 00:21:29.547 [INFO][4313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:29.557871 containerd[1438]: 2025-09-10 00:21:29.553 [INFO][4273] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:29.558452 containerd[1438]: time="2025-09-10T00:21:29.558427650Z" level=info msg="TearDown network for sandbox \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\" successfully" Sep 10 00:21:29.558522 containerd[1438]: time="2025-09-10T00:21:29.558509533Z" level=info msg="StopPodSandbox for \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\" returns successfully" Sep 10 00:21:29.559323 containerd[1438]: time="2025-09-10T00:21:29.559268760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2dhv8,Uid:c0d446ad-5074-4566-b66e-22a6ab7ca731,Namespace:calico-system,Attempt:1,}" Sep 10 00:21:29.674331 systemd-networkd[1381]: cali5b28740e800: Link UP Sep 10 00:21:29.677150 systemd-networkd[1381]: cali5b28740e800: Gained carrier Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.591 [INFO][4336] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--x5mwf-eth0 goldmane-54d579b49d- calico-system 375bc444-bbf5-4839-b395-b0f406ed06db 956 0 2025-09-10 00:21:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-x5mwf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5b28740e800 [] [] }} ContainerID="b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" Namespace="calico-system" Pod="goldmane-54d579b49d-x5mwf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x5mwf-" Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.591 [INFO][4336] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" Namespace="calico-system" Pod="goldmane-54d579b49d-x5mwf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.622 [INFO][4381] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" HandleID="k8s-pod-network.b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" Workload="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.622 [INFO][4381] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" HandleID="k8s-pod-network.b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" Workload="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005a4af0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-x5mwf", "timestamp":"2025-09-10 00:21:29.622794424 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.623 [INFO][4381] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.623 [INFO][4381] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.623 [INFO][4381] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.635 [INFO][4381] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" host="localhost" Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.641 [INFO][4381] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.646 [INFO][4381] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.648 [INFO][4381] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.650 [INFO][4381] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.650 [INFO][4381] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" host="localhost" Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.653 [INFO][4381] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.656 [INFO][4381] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" host="localhost" Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.663 [INFO][4381] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" host="localhost" Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.664 [INFO][4381] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" host="localhost" Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.664 [INFO][4381] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
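The pod_startup_latency_tracker entries above publish enough timestamps to rederive their own numbers. For whisker-797fc48d96-htrjs: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). At least for the values quoted in this log, both identities hold exactly; a check in Go:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        // Timestamps copied from the whisker-797fc48d96-htrjs tracker entry.
        created := parse("2025-09-10 00:21:21 +0000 UTC")
        firstPull := parse("2025-09-10 00:21:22.138800378 +0000 UTC")
        lastPull := parse("2025-09-10 00:21:25.055415787 +0000 UTC")
        watchObserved := parse("2025-09-10 00:21:29.448609587 +0000 UTC")

        e2e := watchObserved.Sub(created)
        slo := e2e - lastPull.Sub(firstPull) // image pull time excluded
        fmt.Println("podStartE2EDuration:", e2e) // 8.448609587s
        fmt.Println("podStartSLOduration:", slo) // 5.531994178s
    }

The calico-node-h7xfz entry earlier reconciles the same way: 13.570316028s end to end, of which 11.594186576s was spent pulling the 151 MB node image.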
Sep 10 00:21:29.686887 containerd[1438]: 2025-09-10 00:21:29.664 [INFO][4381] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" HandleID="k8s-pod-network.b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" Workload="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:29.687551 containerd[1438]: 2025-09-10 00:21:29.672 [INFO][4336] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" Namespace="calico-system" Pod="goldmane-54d579b49d-x5mwf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--x5mwf-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"375bc444-bbf5-4839-b395-b0f406ed06db", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-x5mwf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b28740e800", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:29.687551 containerd[1438]: 2025-09-10 00:21:29.672 [INFO][4336] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" Namespace="calico-system" Pod="goldmane-54d579b49d-x5mwf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:29.687551 containerd[1438]: 2025-09-10 00:21:29.672 [INFO][4336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b28740e800 ContainerID="b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" Namespace="calico-system" Pod="goldmane-54d579b49d-x5mwf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:29.687551 containerd[1438]: 2025-09-10 00:21:29.674 [INFO][4336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" Namespace="calico-system" Pod="goldmane-54d579b49d-x5mwf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:29.687551 containerd[1438]: 2025-09-10 00:21:29.674 [INFO][4336] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" Namespace="calico-system" Pod="goldmane-54d579b49d-x5mwf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--x5mwf-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"375bc444-bbf5-4839-b395-b0f406ed06db", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f", Pod:"goldmane-54d579b49d-x5mwf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b28740e800", MAC:"9a:9e:18:2e:ba:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:29.687551 containerd[1438]: 2025-09-10 00:21:29.683 [INFO][4336] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f" Namespace="calico-system" Pod="goldmane-54d579b49d-x5mwf" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:29.706843 containerd[1438]: time="2025-09-10T00:21:29.706762630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:21:29.707698 containerd[1438]: time="2025-09-10T00:21:29.707202206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:21:29.707698 containerd[1438]: time="2025-09-10T00:21:29.707540578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:29.707698 containerd[1438]: time="2025-09-10T00:21:29.707637061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:29.732551 systemd[1]: Started cri-containerd-b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f.scope - libcontainer container b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f. Sep 10 00:21:29.736624 systemd[1]: Started sshd@7-10.0.0.141:22-10.0.0.1:37990.service - OpenSSH per-connection server daemon (10.0.0.1:37990). 
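The entries above are one complete Calico CNI ADD for the goldmane pod: the plugin finds the existing WorkloadEndpoint, asks Calico IPAM for a single IPv4 address, takes the host-wide IPAM lock, confirms this host's affinity to the block 192.168.88.128/26, claims 192.168.88.130/26 under the handle k8s-pod-network.b2edb81a..., and writes the block back before releasing the lock. The following is a minimal, self-contained Go sketch of that claim path; it mirrors the logged steps but is illustrative only, not the real libcalico-go client, and the pre-existing .128/.129 entries are assumptions since only .130 and later are assigned in this excerpt.

    // Toy model of the logged IPAM flow: acquire the host-wide lock, scan the
    // affine /26 for the first free address, record the claim against a handle,
    // release the lock. Illustrative names only; not the libcalico-go API.
    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    var ipamLock sync.Mutex // the "host-wide IPAM lock" in the log

    type block struct {
        cidr *net.IPNet        // 192.168.88.128/26, affine to host "localhost"
        used map[string]string // IP -> handle, e.g. "k8s-pod-network.<sandboxID>"
    }

    func autoAssign(b *block, handle string) (net.IP, error) {
        ipamLock.Lock()         // "Acquired host-wide IPAM lock."
        defer ipamLock.Unlock() // "Released host-wide IPAM lock."
        for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
            if _, taken := b.used[ip.String()]; !taken {
                b.used[ip.String()] = handle // "Writing block in order to claim IPs"
                return ip, nil
            }
        }
        return nil, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func next(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            if out[i]++; out[i] != 0 {
                break
            }
        }
        return out
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.88.128/26")
        b := &block{cidr: cidr, used: map[string]string{
            "192.168.88.128": "network", // unusable network address
            "192.168.88.129": "earlier", // assumed claimed before this excerpt
        }}
        ip, err := autoAssign(b, "k8s-pod-network.b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f")
        fmt.Println(ip, err) // 192.168.88.130 <nil>, matching the log
    }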
Sep 10 00:21:29.750770 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:21:29.777286 systemd-networkd[1381]: calia79fdaf6e14: Link UP Sep 10 00:21:29.777478 systemd-networkd[1381]: calia79fdaf6e14: Gained carrier Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.592 [INFO][4327] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0 coredns-668d6bf9bc- kube-system bbdc19e7-71b6-4a83-8e74-efca458fc0cb 955 0 2025-09-10 00:20:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-4mrqt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia79fdaf6e14 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" Namespace="kube-system" Pod="coredns-668d6bf9bc-4mrqt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4mrqt-" Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.592 [INFO][4327] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" Namespace="kube-system" Pod="coredns-668d6bf9bc-4mrqt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.627 [INFO][4380] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" HandleID="k8s-pod-network.69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" Workload="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.627 [INFO][4380] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" HandleID="k8s-pod-network.69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" Workload="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-4mrqt", "timestamp":"2025-09-10 00:21:29.627073619 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.627 [INFO][4380] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.664 [INFO][4380] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.664 [INFO][4380] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.734 [INFO][4380] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" host="localhost" Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.742 [INFO][4380] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.747 [INFO][4380] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.750 [INFO][4380] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.752 [INFO][4380] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.752 [INFO][4380] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" host="localhost" Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.754 [INFO][4380] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410 Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.760 [INFO][4380] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" host="localhost" Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.769 [INFO][4380] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" host="localhost" Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.769 [INFO][4380] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" host="localhost" Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.769 [INFO][4380] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:21:29.794578 containerd[1438]: 2025-09-10 00:21:29.769 [INFO][4380] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" HandleID="k8s-pod-network.69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" Workload="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:29.795755 containerd[1438]: 2025-09-10 00:21:29.774 [INFO][4327] cni-plugin/k8s.go 418: Populated endpoint ContainerID="69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" Namespace="kube-system" Pod="coredns-668d6bf9bc-4mrqt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bbdc19e7-71b6-4a83-8e74-efca458fc0cb", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-4mrqt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia79fdaf6e14", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:29.795755 containerd[1438]: 2025-09-10 00:21:29.774 [INFO][4327] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" Namespace="kube-system" Pod="coredns-668d6bf9bc-4mrqt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:29.795755 containerd[1438]: 2025-09-10 00:21:29.774 [INFO][4327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia79fdaf6e14 ContainerID="69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" Namespace="kube-system" Pod="coredns-668d6bf9bc-4mrqt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:29.795755 containerd[1438]: 2025-09-10 00:21:29.776 [INFO][4327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" Namespace="kube-system" Pod="coredns-668d6bf9bc-4mrqt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:29.795755 
containerd[1438]: 2025-09-10 00:21:29.778 [INFO][4327] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" Namespace="kube-system" Pod="coredns-668d6bf9bc-4mrqt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bbdc19e7-71b6-4a83-8e74-efca458fc0cb", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410", Pod:"coredns-668d6bf9bc-4mrqt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia79fdaf6e14", MAC:"f2:33:c4:7b:5f:a5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:29.795755 containerd[1438]: 2025-09-10 00:21:29.791 [INFO][4327] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410" Namespace="kube-system" Pod="coredns-668d6bf9bc-4mrqt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:29.799215 containerd[1438]: time="2025-09-10T00:21:29.799171261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-x5mwf,Uid:375bc444-bbf5-4839-b395-b0f406ed06db,Namespace:calico-system,Attempt:1,} returns sandbox id \"b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f\"" Sep 10 00:21:29.800772 containerd[1438]: time="2025-09-10T00:21:29.800745878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 00:21:29.805884 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 37990 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:21:29.806786 sshd[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:21:29.813426 systemd-logind[1422]: New session 8 of user core. Sep 10 00:21:29.818239 systemd[1]: Started session-8.scope - Session 8 of User core. 
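In the coredns endpoint dumps above, the struct printer renders the uint16 Port fields as Go hex literals: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153, the conventional coredns Prometheus metrics port. Note also that MAC:"f2:33:c4:7b:5f:a5" is only filled in at the "Added Mac, interface name, and active container ID" step, once the veth calia79fdaf6e14 exists. A two-line check of the hex decoding:

    package main

    import "fmt"

    func main() {
        // The endpoint dump prints the uint16 Port fields in hex.
        fmt.Println(0x35, 0x23c1) // 53 9153: dns/dns-tcp and coredns metrics
    }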
Sep 10 00:21:29.818831 containerd[1438]: time="2025-09-10T00:21:29.818561965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:21:29.818831 containerd[1438]: time="2025-09-10T00:21:29.818634087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:21:29.818831 containerd[1438]: time="2025-09-10T00:21:29.818645688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:29.818831 containerd[1438]: time="2025-09-10T00:21:29.818751971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:29.835724 systemd[1]: Started cri-containerd-69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410.scope - libcontainer container 69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410. Sep 10 00:21:29.848766 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:21:29.874482 systemd-networkd[1381]: calie1bbc7ea5a2: Link UP Sep 10 00:21:29.876018 systemd-networkd[1381]: calie1bbc7ea5a2: Gained carrier Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.622 [INFO][4354] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0 calico-apiserver-9b45cc59c- calico-apiserver 4f46fd77-2fed-494f-b2c2-daf32e135470 954 0 2025-09-10 00:21:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9b45cc59c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9b45cc59c-mrqjs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie1bbc7ea5a2 [] [] }} ContainerID="9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-mrqjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-" Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.622 [INFO][4354] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-mrqjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.658 [INFO][4400] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" HandleID="k8s-pod-network.9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" Workload="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.658 [INFO][4400] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" HandleID="k8s-pod-network.9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" Workload="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400018d130), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9b45cc59c-mrqjs", "timestamp":"2025-09-10 00:21:29.658835451 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.658 [INFO][4400] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.771 [INFO][4400] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.771 [INFO][4400] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.834 [INFO][4400] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" host="localhost" Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.844 [INFO][4400] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.849 [INFO][4400] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.851 [INFO][4400] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.854 [INFO][4400] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.854 [INFO][4400] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" host="localhost" Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.856 [INFO][4400] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.859 [INFO][4400] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" host="localhost" Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.865 [INFO][4400] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" host="localhost" Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.865 [INFO][4400] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" host="localhost" Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.865 [INFO][4400] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:21:29.894135 containerd[1438]: 2025-09-10 00:21:29.865 [INFO][4400] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" HandleID="k8s-pod-network.9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" Workload="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:29.896008 containerd[1438]: 2025-09-10 00:21:29.870 [INFO][4354] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-mrqjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0", GenerateName:"calico-apiserver-9b45cc59c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f46fd77-2fed-494f-b2c2-daf32e135470", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b45cc59c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9b45cc59c-mrqjs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1bbc7ea5a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:29.896008 containerd[1438]: 2025-09-10 00:21:29.870 [INFO][4354] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-mrqjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:29.896008 containerd[1438]: 2025-09-10 00:21:29.870 [INFO][4354] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1bbc7ea5a2 ContainerID="9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-mrqjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:29.896008 containerd[1438]: 2025-09-10 00:21:29.878 [INFO][4354] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-mrqjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:29.896008 containerd[1438]: 2025-09-10 00:21:29.878 [INFO][4354] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-mrqjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0", GenerateName:"calico-apiserver-9b45cc59c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f46fd77-2fed-494f-b2c2-daf32e135470", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b45cc59c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a", Pod:"calico-apiserver-9b45cc59c-mrqjs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1bbc7ea5a2", MAC:"12:ed:3b:3a:fb:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:29.896008 containerd[1438]: 2025-09-10 00:21:29.888 [INFO][4354] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-mrqjs" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:29.899043 containerd[1438]: time="2025-09-10T00:21:29.899007602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4mrqt,Uid:bbdc19e7-71b6-4a83-8e74-efca458fc0cb,Namespace:kube-system,Attempt:1,} returns sandbox id \"69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410\"" Sep 10 00:21:29.899803 kubelet[2466]: E0910 00:21:29.899727 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:21:29.902781 containerd[1438]: time="2025-09-10T00:21:29.902745618Z" level=info msg="CreateContainer within sandbox \"69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:21:29.916583 containerd[1438]: time="2025-09-10T00:21:29.915534402Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:21:29.916583 containerd[1438]: time="2025-09-10T00:21:29.915590324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:21:29.916583 containerd[1438]: time="2025-09-10T00:21:29.915605084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:29.916583 containerd[1438]: time="2025-09-10T00:21:29.915677207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:29.924263 containerd[1438]: time="2025-09-10T00:21:29.924216997Z" level=info msg="CreateContainer within sandbox \"69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90b424abb008587b554ebfe2fe27d645ebef1775daf219bc041f0058e3bcc274\"" Sep 10 00:21:29.926516 containerd[1438]: time="2025-09-10T00:21:29.925964980Z" level=info msg="StartContainer for \"90b424abb008587b554ebfe2fe27d645ebef1775daf219bc041f0058e3bcc274\"" Sep 10 00:21:29.944722 systemd[1]: Started cri-containerd-9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a.scope - libcontainer container 9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a. Sep 10 00:21:29.950756 systemd[1]: Started cri-containerd-90b424abb008587b554ebfe2fe27d645ebef1775daf219bc041f0058e3bcc274.scope - libcontainer container 90b424abb008587b554ebfe2fe27d645ebef1775daf219bc041f0058e3bcc274. Sep 10 00:21:29.972806 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:21:29.981442 systemd-networkd[1381]: cali9806c04a82e: Link UP Sep 10 00:21:29.983984 systemd-networkd[1381]: cali9806c04a82e: Gained carrier Sep 10 00:21:30.010070 containerd[1438]: time="2025-09-10T00:21:30.009939257Z" level=info msg="StartContainer for \"90b424abb008587b554ebfe2fe27d645ebef1775daf219bc041f0058e3bcc274\" returns successfully" Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.640 [INFO][4370] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2dhv8-eth0 csi-node-driver- calico-system c0d446ad-5074-4566-b66e-22a6ab7ca731 957 0 2025-09-10 00:21:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2dhv8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9806c04a82e [] [] }} ContainerID="f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" Namespace="calico-system" Pod="csi-node-driver-2dhv8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2dhv8-" Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.640 [INFO][4370] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" Namespace="calico-system" Pod="csi-node-driver-2dhv8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.672 [INFO][4406] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" HandleID="k8s-pod-network.f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" Workload="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.672 [INFO][4406] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" HandleID="k8s-pod-network.f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" Workload="localhost-k8s-csi--node--driver--2dhv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000528b20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2dhv8", "timestamp":"2025-09-10 00:21:29.672009649 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.672 [INFO][4406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.866 [INFO][4406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.867 [INFO][4406] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.935 [INFO][4406] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" host="localhost" Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.943 [INFO][4406] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.950 [INFO][4406] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.952 [INFO][4406] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.954 [INFO][4406] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.954 [INFO][4406] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" host="localhost" Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.956 [INFO][4406] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.960 [INFO][4406] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" host="localhost" Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.967 [INFO][4406] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" host="localhost" Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.969 [INFO][4406] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" host="localhost" Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.969 [INFO][4406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:21:30.018218 containerd[1438]: 2025-09-10 00:21:29.969 [INFO][4406] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" HandleID="k8s-pod-network.f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" Workload="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:30.018757 containerd[1438]: 2025-09-10 00:21:29.978 [INFO][4370] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" Namespace="calico-system" Pod="csi-node-driver-2dhv8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2dhv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2dhv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0d446ad-5074-4566-b66e-22a6ab7ca731", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2dhv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9806c04a82e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:30.018757 containerd[1438]: 2025-09-10 00:21:29.978 [INFO][4370] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" Namespace="calico-system" Pod="csi-node-driver-2dhv8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:30.018757 containerd[1438]: 2025-09-10 00:21:29.978 [INFO][4370] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9806c04a82e ContainerID="f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" Namespace="calico-system" Pod="csi-node-driver-2dhv8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:30.018757 containerd[1438]: 2025-09-10 00:21:29.983 [INFO][4370] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" Namespace="calico-system" Pod="csi-node-driver-2dhv8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:30.018757 containerd[1438]: 2025-09-10 00:21:29.985 [INFO][4370] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" Namespace="calico-system" Pod="csi-node-driver-2dhv8" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--2dhv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2dhv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0d446ad-5074-4566-b66e-22a6ab7ca731", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e", Pod:"csi-node-driver-2dhv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9806c04a82e", MAC:"ee:3b:58:be:ec:17", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:30.018757 containerd[1438]: 2025-09-10 00:21:30.004 [INFO][4370] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e" Namespace="calico-system" Pod="csi-node-driver-2dhv8" WorkloadEndpoint="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:30.040092 containerd[1438]: time="2025-09-10T00:21:30.039962436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:21:30.040092 containerd[1438]: time="2025-09-10T00:21:30.040017478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:21:30.040253 containerd[1438]: time="2025-09-10T00:21:30.040041239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:30.040493 containerd[1438]: time="2025-09-10T00:21:30.040333249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:21:30.041062 containerd[1438]: time="2025-09-10T00:21:30.040846347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b45cc59c-mrqjs,Uid:4f46fd77-2fed-494f-b2c2-daf32e135470,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a\"" Sep 10 00:21:30.069240 systemd[1]: Started cri-containerd-f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e.scope - libcontainer container f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e. 
Sep 10 00:21:30.085324 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:21:30.097086 containerd[1438]: time="2025-09-10T00:21:30.097030729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2dhv8,Uid:c0d446ad-5074-4566-b66e-22a6ab7ca731,Namespace:calico-system,Attempt:1,} returns sandbox id \"f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e\"" Sep 10 00:21:30.194444 sshd[4457]: pam_unix(sshd:session): session closed for user core Sep 10 00:21:30.198233 systemd[1]: sshd@7-10.0.0.141:22-10.0.0.1:37990.service: Deactivated successfully. Sep 10 00:21:30.201549 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 00:21:30.202850 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit. Sep 10 00:21:30.203831 systemd-logind[1422]: Removed session 8. Sep 10 00:21:30.391954 containerd[1438]: time="2025-09-10T00:21:30.391915410Z" level=info msg="StopPodSandbox for \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\"" Sep 10 00:21:30.469371 containerd[1438]: 2025-09-10 00:21:30.433 [INFO][4698] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:30.469371 containerd[1438]: 2025-09-10 00:21:30.433 [INFO][4698] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" iface="eth0" netns="/var/run/netns/cni-064c4b81-0f97-23e2-72ae-13c44984e443" Sep 10 00:21:30.469371 containerd[1438]: 2025-09-10 00:21:30.434 [INFO][4698] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" iface="eth0" netns="/var/run/netns/cni-064c4b81-0f97-23e2-72ae-13c44984e443" Sep 10 00:21:30.469371 containerd[1438]: 2025-09-10 00:21:30.434 [INFO][4698] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" iface="eth0" netns="/var/run/netns/cni-064c4b81-0f97-23e2-72ae-13c44984e443" Sep 10 00:21:30.469371 containerd[1438]: 2025-09-10 00:21:30.434 [INFO][4698] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:30.469371 containerd[1438]: 2025-09-10 00:21:30.434 [INFO][4698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:30.469371 containerd[1438]: 2025-09-10 00:21:30.454 [INFO][4708] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" HandleID="k8s-pod-network.9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Workload="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" Sep 10 00:21:30.469371 containerd[1438]: 2025-09-10 00:21:30.454 [INFO][4708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:30.469371 containerd[1438]: 2025-09-10 00:21:30.454 [INFO][4708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:30.469371 containerd[1438]: 2025-09-10 00:21:30.463 [WARNING][4708] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" HandleID="k8s-pod-network.9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Workload="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" Sep 10 00:21:30.469371 containerd[1438]: 2025-09-10 00:21:30.463 [INFO][4708] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" HandleID="k8s-pod-network.9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Workload="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" Sep 10 00:21:30.469371 containerd[1438]: 2025-09-10 00:21:30.464 [INFO][4708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:30.469371 containerd[1438]: 2025-09-10 00:21:30.466 [INFO][4698] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:30.470273 containerd[1438]: time="2025-09-10T00:21:30.470022686Z" level=info msg="TearDown network for sandbox \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\" successfully" Sep 10 00:21:30.470273 containerd[1438]: time="2025-09-10T00:21:30.470061087Z" level=info msg="StopPodSandbox for \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\" returns successfully" Sep 10 00:21:30.471005 kubelet[2466]: E0910 00:21:30.470977 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:21:30.471770 containerd[1438]: time="2025-09-10T00:21:30.471744186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlwmv,Uid:e347e663-fe05-4ba7-b6ca-d630372bf51c,Namespace:kube-system,Attempt:1,}" Sep 10 00:21:30.524036 systemd[1]: run-netns-cni\x2d064c4b81\x2d0f97\x2d23e2\x2d72ae\x2d13c44984e443.mount: Deactivated successfully. Sep 10 00:21:30.524451 systemd[1]: run-netns-cni\x2d4cf2b3f9\x2d127a\x2ddf29\x2dcfca\x2d6e4640c69eb9.mount: Deactivated successfully. 
Sep 10 00:21:30.580540 kubelet[2466]: E0910 00:21:30.580003 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:21:30.616088 kubelet[2466]: I0910 00:21:30.615092 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4mrqt" podStartSLOduration=34.615074042 podStartE2EDuration="34.615074042s" podCreationTimestamp="2025-09-10 00:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:21:30.596266139 +0000 UTC m=+41.294386265" watchObservedRunningTime="2025-09-10 00:21:30.615074042 +0000 UTC m=+41.313194048" Sep 10 00:21:30.622969 systemd-networkd[1381]: calia928239cd82: Link UP Sep 10 00:21:30.624507 systemd-networkd[1381]: calia928239cd82: Gained carrier Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.527 [INFO][4716] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0 coredns-668d6bf9bc- kube-system e347e663-fe05-4ba7-b6ca-d630372bf51c 990 0 2025-09-10 00:20:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-tlwmv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia928239cd82 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlwmv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlwmv-" Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.527 [INFO][4716] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlwmv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.553 [INFO][4730] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" HandleID="k8s-pod-network.49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" Workload="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.553 [INFO][4730] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" HandleID="k8s-pod-network.49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" Workload="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-tlwmv", "timestamp":"2025-09-10 00:21:30.552990012 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.553 [INFO][4730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.553 [INFO][4730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.553 [INFO][4730] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.562 [INFO][4730] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" host="localhost" Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.569 [INFO][4730] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.574 [INFO][4730] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.576 [INFO][4730] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.579 [INFO][4730] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.579 [INFO][4730] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" host="localhost" Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.581 [INFO][4730] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47 Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.589 [INFO][4730] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" host="localhost" Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.612 [INFO][4730] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" host="localhost" Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.612 [INFO][4730] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" host="localhost" Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.612 [INFO][4730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:21:30.651917 containerd[1438]: 2025-09-10 00:21:30.612 [INFO][4730] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" HandleID="k8s-pod-network.49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" Workload="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0"
Sep 10 00:21:30.652552 containerd[1438]: 2025-09-10 00:21:30.617 [INFO][4716] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlwmv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e347e663-fe05-4ba7-b6ca-d630372bf51c", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-tlwmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia928239cd82", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:21:30.652552 containerd[1438]: 2025-09-10 00:21:30.617 [INFO][4716] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlwmv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0"
Sep 10 00:21:30.652552 containerd[1438]: 2025-09-10 00:21:30.618 [INFO][4716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia928239cd82 ContainerID="49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlwmv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0"
Sep 10 00:21:30.652552 containerd[1438]: 2025-09-10 00:21:30.623 [INFO][4716] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlwmv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0"
Sep 10 00:21:30.652552 containerd[1438]: 2025-09-10 00:21:30.626 [INFO][4716] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlwmv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e347e663-fe05-4ba7-b6ca-d630372bf51c", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47", Pod:"coredns-668d6bf9bc-tlwmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia928239cd82", MAC:"9e:2b:48:bf:6f:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:21:30.652552 containerd[1438]: 2025-09-10 00:21:30.647 [INFO][4716] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47" Namespace="kube-system" Pod="coredns-668d6bf9bc-tlwmv" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0"
Sep 10 00:21:30.675148 containerd[1438]: time="2025-09-10T00:21:30.675022677Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:21:30.675148 containerd[1438]: time="2025-09-10T00:21:30.675094959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:21:30.675148 containerd[1438]: time="2025-09-10T00:21:30.675110880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:21:30.675343 containerd[1438]: time="2025-09-10T00:21:30.675186682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:21:30.696408 systemd[1]: run-containerd-runc-k8s.io-49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47-runc.7KYc9V.mount: Deactivated successfully.
Sep 10 00:21:30.708089 systemd[1]: Started cri-containerd-49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47.scope - libcontainer container 49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47.
Sep 10 00:21:30.722932 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:21:30.741448 containerd[1438]: time="2025-09-10T00:21:30.741298774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-tlwmv,Uid:e347e663-fe05-4ba7-b6ca-d630372bf51c,Namespace:kube-system,Attempt:1,} returns sandbox id \"49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47\""
Sep 10 00:21:30.742280 kubelet[2466]: E0910 00:21:30.742257 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:21:30.745443 containerd[1438]: time="2025-09-10T00:21:30.745321796Z" level=info msg="CreateContainer within sandbox \"49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 10 00:21:30.759827 containerd[1438]: time="2025-09-10T00:21:30.759792507Z" level=info msg="CreateContainer within sandbox \"49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"324b5d3a8a56f0522b310ba0b719d4819ae23ab1242507b463a3709ef4e6a53d\""
Sep 10 00:21:30.760338 containerd[1438]: time="2025-09-10T00:21:30.760261483Z" level=info msg="StartContainer for \"324b5d3a8a56f0522b310ba0b719d4819ae23ab1242507b463a3709ef4e6a53d\""
Sep 10 00:21:30.789200 systemd[1]: Started cri-containerd-324b5d3a8a56f0522b310ba0b719d4819ae23ab1242507b463a3709ef4e6a53d.scope - libcontainer container 324b5d3a8a56f0522b310ba0b719d4819ae23ab1242507b463a3709ef4e6a53d.
Sep 10 00:21:30.810548 containerd[1438]: time="2025-09-10T00:21:30.810507216Z" level=info msg="StartContainer for \"324b5d3a8a56f0522b310ba0b719d4819ae23ab1242507b463a3709ef4e6a53d\" returns successfully"
Sep 10 00:21:31.049175 systemd-networkd[1381]: cali9806c04a82e: Gained IPv6LL
Sep 10 00:21:31.393078 containerd[1438]: time="2025-09-10T00:21:31.392502699Z" level=info msg="StopPodSandbox for \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\""
Sep 10 00:21:31.393078 containerd[1438]: time="2025-09-10T00:21:31.392748108Z" level=info msg="StopPodSandbox for \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\""
Sep 10 00:21:31.488210 containerd[1438]: 2025-09-10 00:21:31.446 [INFO][4846] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961"
Sep 10 00:21:31.488210 containerd[1438]: 2025-09-10 00:21:31.446 [INFO][4846] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" iface="eth0" netns="/var/run/netns/cni-2bf0a1f7-27b2-6eb7-db1a-5fb2717b4555"
Sep 10 00:21:31.488210 containerd[1438]: 2025-09-10 00:21:31.446 [INFO][4846] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" iface="eth0" netns="/var/run/netns/cni-2bf0a1f7-27b2-6eb7-db1a-5fb2717b4555"
Sep 10 00:21:31.488210 containerd[1438]: 2025-09-10 00:21:31.447 [INFO][4846] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" iface="eth0" netns="/var/run/netns/cni-2bf0a1f7-27b2-6eb7-db1a-5fb2717b4555"
Sep 10 00:21:31.488210 containerd[1438]: 2025-09-10 00:21:31.447 [INFO][4846] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961"
Sep 10 00:21:31.488210 containerd[1438]: 2025-09-10 00:21:31.447 [INFO][4846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961"
Sep 10 00:21:31.488210 containerd[1438]: 2025-09-10 00:21:31.473 [INFO][4874] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" HandleID="k8s-pod-network.3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Workload="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0"
Sep 10 00:21:31.488210 containerd[1438]: 2025-09-10 00:21:31.473 [INFO][4874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:21:31.488210 containerd[1438]: 2025-09-10 00:21:31.474 [INFO][4874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:21:31.488210 containerd[1438]: 2025-09-10 00:21:31.482 [WARNING][4874] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" HandleID="k8s-pod-network.3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Workload="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0"
Sep 10 00:21:31.488210 containerd[1438]: 2025-09-10 00:21:31.482 [INFO][4874] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" HandleID="k8s-pod-network.3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Workload="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0"
Sep 10 00:21:31.488210 containerd[1438]: 2025-09-10 00:21:31.483 [INFO][4874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:21:31.488210 containerd[1438]: 2025-09-10 00:21:31.485 [INFO][4846] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961"
Sep 10 00:21:31.489007 containerd[1438]: time="2025-09-10T00:21:31.488417873Z" level=info msg="TearDown network for sandbox \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\" successfully"
Sep 10 00:21:31.489007 containerd[1438]: time="2025-09-10T00:21:31.488442194Z" level=info msg="StopPodSandbox for \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\" returns successfully"
Sep 10 00:21:31.489555 containerd[1438]: time="2025-09-10T00:21:31.489530671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b45cc59c-f9r2t,Uid:4e6abdb0-8540-4e43-a17d-9abbd4d6ede7,Namespace:calico-apiserver,Attempt:1,}"
Sep 10 00:21:31.498888 containerd[1438]: 2025-09-10 00:21:31.451 [INFO][4861] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b"
Sep 10 00:21:31.498888 containerd[1438]: 2025-09-10 00:21:31.451 [INFO][4861] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" iface="eth0" netns="/var/run/netns/cni-7ec7a257-5e56-3cf9-2e57-29a8deb6a118"
Sep 10 00:21:31.498888 containerd[1438]: 2025-09-10 00:21:31.452 [INFO][4861] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" iface="eth0" netns="/var/run/netns/cni-7ec7a257-5e56-3cf9-2e57-29a8deb6a118"
Sep 10 00:21:31.498888 containerd[1438]: 2025-09-10 00:21:31.452 [INFO][4861] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" iface="eth0" netns="/var/run/netns/cni-7ec7a257-5e56-3cf9-2e57-29a8deb6a118"
Sep 10 00:21:31.498888 containerd[1438]: 2025-09-10 00:21:31.452 [INFO][4861] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b"
Sep 10 00:21:31.498888 containerd[1438]: 2025-09-10 00:21:31.452 [INFO][4861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b"
Sep 10 00:21:31.498888 containerd[1438]: 2025-09-10 00:21:31.478 [INFO][4880] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" HandleID="k8s-pod-network.7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Workload="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0"
Sep 10 00:21:31.498888 containerd[1438]: 2025-09-10 00:21:31.478 [INFO][4880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:21:31.498888 containerd[1438]: 2025-09-10 00:21:31.484 [INFO][4880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:21:31.498888 containerd[1438]: 2025-09-10 00:21:31.493 [WARNING][4880] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" HandleID="k8s-pod-network.7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Workload="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0"
Sep 10 00:21:31.498888 containerd[1438]: 2025-09-10 00:21:31.493 [INFO][4880] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" HandleID="k8s-pod-network.7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Workload="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0"
Sep 10 00:21:31.498888 containerd[1438]: 2025-09-10 00:21:31.495 [INFO][4880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:21:31.498888 containerd[1438]: 2025-09-10 00:21:31.496 [INFO][4861] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b"
Sep 10 00:21:31.499279 containerd[1438]: time="2025-09-10T00:21:31.499037198Z" level=info msg="TearDown network for sandbox \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\" successfully"
Sep 10 00:21:31.499279 containerd[1438]: time="2025-09-10T00:21:31.499070479Z" level=info msg="StopPodSandbox for \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\" returns successfully"
Sep 10 00:21:31.500029 containerd[1438]: time="2025-09-10T00:21:31.499767223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844fff9f5c-zklz7,Uid:85708529-bae5-41c2-aae8-a8dcb5a3de9c,Namespace:calico-system,Attempt:1,}"
Sep 10 00:21:31.517204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount187892928.mount: Deactivated successfully.
Sep 10 00:21:31.517304 systemd[1]: run-netns-cni\x2d2bf0a1f7\x2d27b2\x2d6eb7\x2ddb1a\x2d5fb2717b4555.mount: Deactivated successfully.
Sep 10 00:21:31.517356 systemd[1]: run-netns-cni\x2d7ec7a257\x2d5e56\x2d3cf9\x2d2e57\x2d29a8deb6a118.mount: Deactivated successfully.
Sep 10 00:21:31.591720 kubelet[2466]: E0910 00:21:31.591139 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:21:31.591720 kubelet[2466]: E0910 00:21:31.591172 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:21:31.605681 kubelet[2466]: I0910 00:21:31.605623 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-tlwmv" podStartSLOduration=35.605588057 podStartE2EDuration="35.605588057s" podCreationTimestamp="2025-09-10 00:20:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:21:31.604761588 +0000 UTC m=+42.302881634" watchObservedRunningTime="2025-09-10 00:21:31.605588057 +0000 UTC m=+42.303708103"
Sep 10 00:21:31.625538 systemd-networkd[1381]: cali5b28740e800: Gained IPv6LL
Sep 10 00:21:31.674252 systemd-networkd[1381]: cali29e85dc0d14: Link UP
Sep 10 00:21:31.675219 systemd-networkd[1381]: cali29e85dc0d14: Gained carrier
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.569 [INFO][4896] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0 calico-apiserver-9b45cc59c- calico-apiserver 4e6abdb0-8540-4e43-a17d-9abbd4d6ede7 1016 0 2025-09-10 00:21:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9b45cc59c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-9b45cc59c-f9r2t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali29e85dc0d14 [] [] }} ContainerID="b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-f9r2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-"
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.569 [INFO][4896] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-f9r2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0"
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.608 [INFO][4919] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" HandleID="k8s-pod-network.b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" Workload="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0"
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.609 [INFO][4919] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" HandleID="k8s-pod-network.b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" Workload="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-9b45cc59c-f9r2t", "timestamp":"2025-09-10 00:21:31.608559799 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.609 [INFO][4919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.609 [INFO][4919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.609 [INFO][4919] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.624 [INFO][4919] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" host="localhost"
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.630 [INFO][4919] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.636 [INFO][4919] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.637 [INFO][4919] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.640 [INFO][4919] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.641 [INFO][4919] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" host="localhost"
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.644 [INFO][4919] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.655 [INFO][4919] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" host="localhost"
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.663 [INFO][4919] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" host="localhost"
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.663 [INFO][4919] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" host="localhost"
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.663 [INFO][4919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:21:31.692975 containerd[1438]: 2025-09-10 00:21:31.663 [INFO][4919] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" HandleID="k8s-pod-network.b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" Workload="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0"
Sep 10 00:21:31.694192 containerd[1438]: 2025-09-10 00:21:31.670 [INFO][4896] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-f9r2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0", GenerateName:"calico-apiserver-9b45cc59c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e6abdb0-8540-4e43-a17d-9abbd4d6ede7", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b45cc59c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-9b45cc59c-f9r2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29e85dc0d14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:21:31.694192 containerd[1438]: 2025-09-10 00:21:31.670 [INFO][4896] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-f9r2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0"
Sep 10 00:21:31.694192 containerd[1438]: 2025-09-10 00:21:31.670 [INFO][4896] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29e85dc0d14 ContainerID="b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-f9r2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0"
Sep 10 00:21:31.694192 containerd[1438]: 2025-09-10 00:21:31.676 [INFO][4896] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-f9r2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0"
Sep 10 00:21:31.694192 containerd[1438]: 2025-09-10 00:21:31.677 [INFO][4896] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-f9r2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0", GenerateName:"calico-apiserver-9b45cc59c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e6abdb0-8540-4e43-a17d-9abbd4d6ede7", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b45cc59c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb", Pod:"calico-apiserver-9b45cc59c-f9r2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29e85dc0d14", MAC:"2a:62:d3:e9:d2:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:21:31.694192 containerd[1438]: 2025-09-10 00:21:31.686 [INFO][4896] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb" Namespace="calico-apiserver" Pod="calico-apiserver-9b45cc59c-f9r2t" WorkloadEndpoint="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0"
Sep 10 00:21:31.735256 containerd[1438]: time="2025-09-10T00:21:31.734825535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:21:31.735256 containerd[1438]: time="2025-09-10T00:21:31.734897697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:21:31.735256 containerd[1438]: time="2025-09-10T00:21:31.734913658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:21:31.735256 containerd[1438]: time="2025-09-10T00:21:31.735081983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:21:31.752574 systemd-networkd[1381]: calia79fdaf6e14: Gained IPv6LL
Sep 10 00:21:31.767105 systemd-networkd[1381]: cali304f352a12a: Link UP
Sep 10 00:21:31.767627 systemd-networkd[1381]: cali304f352a12a: Gained carrier
Sep 10 00:21:31.770203 systemd[1]: Started cri-containerd-b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb.scope - libcontainer container b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb.
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.575 [INFO][4901] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0 calico-kube-controllers-844fff9f5c- calico-system 85708529-bae5-41c2-aae8-a8dcb5a3de9c 1017 0 2025-09-10 00:21:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:844fff9f5c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-844fff9f5c-zklz7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali304f352a12a [] [] }} ContainerID="29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" Namespace="calico-system" Pod="calico-kube-controllers-844fff9f5c-zklz7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-"
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.575 [INFO][4901] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" Namespace="calico-system" Pod="calico-kube-controllers-844fff9f5c-zklz7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0"
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.624 [INFO][4927] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" HandleID="k8s-pod-network.29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" Workload="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0"
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.624 [INFO][4927] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" HandleID="k8s-pod-network.29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" Workload="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400040dd30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-844fff9f5c-zklz7", "timestamp":"2025-09-10 00:21:31.624096732 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.624 [INFO][4927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.663 [INFO][4927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.663 [INFO][4927] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.727 [INFO][4927] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" host="localhost"
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.733 [INFO][4927] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.738 [INFO][4927] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.741 [INFO][4927] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.744 [INFO][4927] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.744 [INFO][4927] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" host="localhost"
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.746 [INFO][4927] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.753 [INFO][4927] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" host="localhost"
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.760 [INFO][4927] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" host="localhost"
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.760 [INFO][4927] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" host="localhost"
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.760 [INFO][4927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 00:21:31.790915 containerd[1438]: 2025-09-10 00:21:31.760 [INFO][4927] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" HandleID="k8s-pod-network.29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" Workload="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0"
Sep 10 00:21:31.791072 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:21:31.791820 containerd[1438]: 2025-09-10 00:21:31.765 [INFO][4901] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" Namespace="calico-system" Pod="calico-kube-controllers-844fff9f5c-zklz7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0", GenerateName:"calico-kube-controllers-844fff9f5c-", Namespace:"calico-system", SelfLink:"", UID:"85708529-bae5-41c2-aae8-a8dcb5a3de9c", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"844fff9f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-844fff9f5c-zklz7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali304f352a12a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:21:31.791820 containerd[1438]: 2025-09-10 00:21:31.765 [INFO][4901] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" Namespace="calico-system" Pod="calico-kube-controllers-844fff9f5c-zklz7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0"
Sep 10 00:21:31.791820 containerd[1438]: 2025-09-10 00:21:31.765 [INFO][4901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali304f352a12a ContainerID="29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" Namespace="calico-system" Pod="calico-kube-controllers-844fff9f5c-zklz7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0"
Sep 10 00:21:31.791820 containerd[1438]: 2025-09-10 00:21:31.768 [INFO][4901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" Namespace="calico-system" Pod="calico-kube-controllers-844fff9f5c-zklz7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0"
Sep 10 00:21:31.791820 containerd[1438]: 2025-09-10 00:21:31.770 [INFO][4901] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" Namespace="calico-system" Pod="calico-kube-controllers-844fff9f5c-zklz7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0", GenerateName:"calico-kube-controllers-844fff9f5c-", Namespace:"calico-system", SelfLink:"", UID:"85708529-bae5-41c2-aae8-a8dcb5a3de9c", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"844fff9f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961", Pod:"calico-kube-controllers-844fff9f5c-zklz7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali304f352a12a", MAC:"fe:5d:4b:dd:36:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 00:21:31.791820 containerd[1438]: 2025-09-10 00:21:31.783 [INFO][4901] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961" Namespace="calico-system" Pod="calico-kube-controllers-844fff9f5c-zklz7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0"
Sep 10 00:21:31.812632 containerd[1438]: time="2025-09-10T00:21:31.812468201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:21:31.812632 containerd[1438]: time="2025-09-10T00:21:31.812522363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:21:31.812632 containerd[1438]: time="2025-09-10T00:21:31.812533323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:21:31.812979 containerd[1438]: time="2025-09-10T00:21:31.812611046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:21:31.816208 systemd-networkd[1381]: calia928239cd82: Gained IPv6LL
Sep 10 00:21:31.822618 containerd[1438]: time="2025-09-10T00:21:31.822212935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b45cc59c-f9r2t,Uid:4e6abdb0-8540-4e43-a17d-9abbd4d6ede7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb\""
Sep 10 00:21:31.842201 systemd[1]: Started cri-containerd-29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961.scope - libcontainer container 29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961.
Sep 10 00:21:31.852797 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 00:21:31.874525 containerd[1438]: time="2025-09-10T00:21:31.874490291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-844fff9f5c-zklz7,Uid:85708529-bae5-41c2-aae8-a8dcb5a3de9c,Namespace:calico-system,Attempt:1,} returns sandbox id \"29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961\""
Sep 10 00:21:31.880207 systemd-networkd[1381]: calie1bbc7ea5a2: Gained IPv6LL
Sep 10 00:21:32.219966 containerd[1438]: time="2025-09-10T00:21:32.219920401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:21:32.220527 containerd[1438]: time="2025-09-10T00:21:32.220498380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 10 00:21:32.221741 containerd[1438]: time="2025-09-10T00:21:32.221702021Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:21:32.224437 containerd[1438]: time="2025-09-10T00:21:32.223952216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:21:32.225026 containerd[1438]: time="2025-09-10T00:21:32.224920848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.424141689s"
Sep 10 00:21:32.225026 containerd[1438]: time="2025-09-10T00:21:32.224951929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 10 00:21:32.226475 containerd[1438]: time="2025-09-10T00:21:32.226448180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 10 00:21:32.228331 containerd[1438]: time="2025-09-10T00:21:32.227191324Z" level=info msg="CreateContainer within sandbox \"b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 10 00:21:32.238001 containerd[1438]: time="2025-09-10T00:21:32.237923404Z" level=info msg="CreateContainer within sandbox \"b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"19bd4efa7ea20b9dac4eeb907d4691947d0f852936dead3ff12dd4a734ba4ba3\""
Sep 10 00:21:32.238476 containerd[1438]: time="2025-09-10T00:21:32.238428980Z" level=info msg="StartContainer for \"19bd4efa7ea20b9dac4eeb907d4691947d0f852936dead3ff12dd4a734ba4ba3\""
Sep 10 00:21:32.263201 systemd[1]: Started cri-containerd-19bd4efa7ea20b9dac4eeb907d4691947d0f852936dead3ff12dd4a734ba4ba3.scope - libcontainer container 19bd4efa7ea20b9dac4eeb907d4691947d0f852936dead3ff12dd4a734ba4ba3.
Sep 10 00:21:32.292105 containerd[1438]: time="2025-09-10T00:21:32.292063935Z" level=info msg="StartContainer for \"19bd4efa7ea20b9dac4eeb907d4691947d0f852936dead3ff12dd4a734ba4ba3\" returns successfully"
Sep 10 00:21:32.514681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount785572324.mount: Deactivated successfully.
Sep 10 00:21:32.600353 kubelet[2466]: E0910 00:21:32.599970 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:21:32.601923 kubelet[2466]: E0910 00:21:32.601552 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:21:32.620069 kubelet[2466]: I0910 00:21:32.619899 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-x5mwf" podStartSLOduration=22.194494534 podStartE2EDuration="24.619881426s" podCreationTimestamp="2025-09-10 00:21:08 +0000 UTC" firstStartedPulling="2025-09-10 00:21:29.80052611 +0000 UTC m=+40.498646156" lastFinishedPulling="2025-09-10 00:21:32.225913042 +0000 UTC m=+42.924033048" observedRunningTime="2025-09-10 00:21:32.606501818 +0000 UTC m=+43.304621864" watchObservedRunningTime="2025-09-10 00:21:32.619881426 +0000 UTC m=+43.318001472"
Sep 10 00:21:33.480188 systemd-networkd[1381]: cali29e85dc0d14: Gained IPv6LL
Sep 10 00:21:33.601210 kubelet[2466]: I0910 00:21:33.601117 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 00:21:33.601575 kubelet[2466]: E0910 00:21:33.601402 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:21:33.736501 systemd-networkd[1381]: cali304f352a12a: Gained IPv6LL
Sep 10 00:21:34.603756 kubelet[2466]: E0910 00:21:34.603724 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:21:34.781497 containerd[1438]: time="2025-09-10T00:21:34.781441143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:21:34.783157 containerd[1438]: time="2025-09-10T00:21:34.783123356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 10 00:21:34.783917 containerd[1438]: time="2025-09-10T00:21:34.783865500Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:21:34.786137 containerd[1438]: time="2025-09-10T00:21:34.786092291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:21:34.786958 containerd[1438]: time="2025-09-10T00:21:34.786712311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.56022621s"
Sep 10 00:21:34.786958 containerd[1438]: time="2025-09-10T00:21:34.786857595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 10 00:21:34.792085 containerd[1438]: time="2025-09-10T00:21:34.790889884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 10 00:21:34.792273 containerd[1438]: time="2025-09-10T00:21:34.792244767Z" level=info msg="CreateContainer within sandbox \"9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 10 00:21:34.806960 containerd[1438]: time="2025-09-10T00:21:34.806924635Z" level=info msg="CreateContainer within sandbox \"9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"19d024b68c7c8cb83d0d3399a52e5c82406c4ea8cb043e97a8835cc326ed143d\""
Sep 10 00:21:34.807692 containerd[1438]: time="2025-09-10T00:21:34.807530094Z" level=info msg="StartContainer for \"19d024b68c7c8cb83d0d3399a52e5c82406c4ea8cb043e97a8835cc326ed143d\""
Sep 10 00:21:34.847196 systemd[1]: Started cri-containerd-19d024b68c7c8cb83d0d3399a52e5c82406c4ea8cb043e97a8835cc326ed143d.scope - libcontainer container 19d024b68c7c8cb83d0d3399a52e5c82406c4ea8cb043e97a8835cc326ed143d.
Sep 10 00:21:34.891885 containerd[1438]: time="2025-09-10T00:21:34.891436209Z" level=info msg="StartContainer for \"19d024b68c7c8cb83d0d3399a52e5c82406c4ea8cb043e97a8835cc326ed143d\" returns successfully"
Sep 10 00:21:35.207410 systemd[1]: Started sshd@8-10.0.0.141:22-10.0.0.1:44470.service - OpenSSH per-connection server daemon (10.0.0.1:44470).
Sep 10 00:21:35.272930 sshd[5152]: Accepted publickey for core from 10.0.0.1 port 44470 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o
Sep 10 00:21:35.274822 sshd[5152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:21:35.279693 systemd-logind[1422]: New session 9 of user core.
Sep 10 00:21:35.289560 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 10 00:21:35.548690 sshd[5152]: pam_unix(sshd:session): session closed for user core
Sep 10 00:21:35.552923 systemd[1]: sshd@8-10.0.0.141:22-10.0.0.1:44470.service: Deactivated successfully.
Sep 10 00:21:35.557029 systemd[1]: session-9.scope: Deactivated successfully.
Sep 10 00:21:35.561093 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit.
Sep 10 00:21:35.562513 systemd-logind[1422]: Removed session 9.
Sep 10 00:21:36.426207 containerd[1438]: time="2025-09-10T00:21:36.426151252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:21:36.427695 containerd[1438]: time="2025-09-10T00:21:36.427493093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 10 00:21:36.428630 containerd[1438]: time="2025-09-10T00:21:36.428422882Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:21:36.430509 containerd[1438]: time="2025-09-10T00:21:36.430471624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:21:36.431476 containerd[1438]: time="2025-09-10T00:21:36.431219327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.640290362s"
Sep 10 00:21:36.431476 containerd[1438]: time="2025-09-10T00:21:36.431251928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 10 00:21:36.432670 containerd[1438]: time="2025-09-10T00:21:36.432549447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 10 00:21:36.434118 containerd[1438]: time="2025-09-10T00:21:36.434085934Z" level=info msg="CreateContainer within sandbox \"f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 10 00:21:36.466879 containerd[1438]: time="2025-09-10T00:21:36.466825972Z" level=info msg="CreateContainer within sandbox \"f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f0e070a7cc4840daf1e7c095bf74bcd75b02f73da096b13dc5aad0ecffddd79d\""
Sep 10 00:21:36.467636 containerd[1438]: time="2025-09-10T00:21:36.467566355Z" level=info msg="StartContainer for \"f0e070a7cc4840daf1e7c095bf74bcd75b02f73da096b13dc5aad0ecffddd79d\""
Sep 10 00:21:36.503281 systemd[1]: Started cri-containerd-f0e070a7cc4840daf1e7c095bf74bcd75b02f73da096b13dc5aad0ecffddd79d.scope - libcontainer container f0e070a7cc4840daf1e7c095bf74bcd75b02f73da096b13dc5aad0ecffddd79d.
Sep 10 00:21:36.529400 containerd[1438]: time="2025-09-10T00:21:36.529348838Z" level=info msg="StartContainer for \"f0e070a7cc4840daf1e7c095bf74bcd75b02f73da096b13dc5aad0ecffddd79d\" returns successfully"
Sep 10 00:21:36.612100 kubelet[2466]: I0910 00:21:36.611688 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 00:21:36.828336 containerd[1438]: time="2025-09-10T00:21:36.828227067Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:21:36.830311 containerd[1438]: time="2025-09-10T00:21:36.829078253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 10 00:21:36.831303 containerd[1438]: time="2025-09-10T00:21:36.831123435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 398.534147ms"
Sep 10 00:21:36.831303 containerd[1438]: time="2025-09-10T00:21:36.831168276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 10 00:21:36.832607 containerd[1438]: time="2025-09-10T00:21:36.832573119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 10 00:21:36.834621 containerd[1438]: time="2025-09-10T00:21:36.834588341Z" level=info msg="CreateContainer within sandbox \"b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 10 00:21:36.846194 containerd[1438]: time="2025-09-10T00:21:36.845994808Z" level=info msg="CreateContainer within sandbox \"b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b5011c65ba190b012fafbe427779e9039f56d24a391a027a87b958babba4f572\""
Sep 10 00:21:36.846831 containerd[1438]: time="2025-09-10T00:21:36.846742431Z" level=info msg="StartContainer for \"b5011c65ba190b012fafbe427779e9039f56d24a391a027a87b958babba4f572\""
Sep 10 00:21:36.885261 systemd[1]: Started cri-containerd-b5011c65ba190b012fafbe427779e9039f56d24a391a027a87b958babba4f572.scope - libcontainer container b5011c65ba190b012fafbe427779e9039f56d24a391a027a87b958babba4f572.
Sep 10 00:21:36.918299 containerd[1438]: time="2025-09-10T00:21:36.918200249Z" level=info msg="StartContainer for \"b5011c65ba190b012fafbe427779e9039f56d24a391a027a87b958babba4f572\" returns successfully" Sep 10 00:21:37.001735 kubelet[2466]: I0910 00:21:37.001694 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:21:37.172393 kubelet[2466]: I0910 00:21:37.172350 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:21:37.173043 kubelet[2466]: I0910 00:21:37.172912 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9b45cc59c-mrqjs" podStartSLOduration=28.429452474 podStartE2EDuration="33.172897903s" podCreationTimestamp="2025-09-10 00:21:04 +0000 UTC" firstStartedPulling="2025-09-10 00:21:30.045885685 +0000 UTC m=+40.744005691" lastFinishedPulling="2025-09-10 00:21:34.789331114 +0000 UTC m=+45.487451120" observedRunningTime="2025-09-10 00:21:35.638549083 +0000 UTC m=+46.336669129" watchObservedRunningTime="2025-09-10 00:21:37.172897903 +0000 UTC m=+47.871017909" Sep 10 00:21:37.629704 kubelet[2466]: I0910 00:21:37.629630 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9b45cc59c-f9r2t" podStartSLOduration=28.621467303 podStartE2EDuration="33.629602012s" podCreationTimestamp="2025-09-10 00:21:04 +0000 UTC" firstStartedPulling="2025-09-10 00:21:31.823823151 +0000 UTC m=+42.521943197" lastFinishedPulling="2025-09-10 00:21:36.83195786 +0000 UTC m=+47.530077906" observedRunningTime="2025-09-10 00:21:37.627079937 +0000 UTC m=+48.325199983" watchObservedRunningTime="2025-09-10 00:21:37.629602012 +0000 UTC m=+48.327722058" Sep 10 00:21:38.617329 kubelet[2466]: I0910 00:21:38.617290 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:21:38.737734 containerd[1438]: time="2025-09-10T00:21:38.737166548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:38.738484 containerd[1438]: time="2025-09-10T00:21:38.738395344Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 10 00:21:38.739695 containerd[1438]: time="2025-09-10T00:21:38.739658261Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:38.742739 containerd[1438]: time="2025-09-10T00:21:38.742314819Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:38.743728 containerd[1438]: time="2025-09-10T00:21:38.743686819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.911071858s" Sep 10 00:21:38.743892 containerd[1438]: time="2025-09-10T00:21:38.743729860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference 
\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 10 00:21:38.765656 containerd[1438]: time="2025-09-10T00:21:38.765611940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 10 00:21:38.779291 containerd[1438]: time="2025-09-10T00:21:38.779235739Z" level=info msg="CreateContainer within sandbox \"29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 00:21:38.797972 containerd[1438]: time="2025-09-10T00:21:38.797919725Z" level=info msg="CreateContainer within sandbox \"29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"eddcbb45d589b63cf3f40cecfa44ef37bf735ec760a3296f84828eca6e3b708e\"" Sep 10 00:21:38.800791 containerd[1438]: time="2025-09-10T00:21:38.800749248Z" level=info msg="StartContainer for \"eddcbb45d589b63cf3f40cecfa44ef37bf735ec760a3296f84828eca6e3b708e\"" Sep 10 00:21:38.834258 systemd[1]: Started cri-containerd-eddcbb45d589b63cf3f40cecfa44ef37bf735ec760a3296f84828eca6e3b708e.scope - libcontainer container eddcbb45d589b63cf3f40cecfa44ef37bf735ec760a3296f84828eca6e3b708e. Sep 10 00:21:38.877253 containerd[1438]: time="2025-09-10T00:21:38.876914316Z" level=info msg="StartContainer for \"eddcbb45d589b63cf3f40cecfa44ef37bf735ec760a3296f84828eca6e3b708e\" returns successfully" Sep 10 00:21:39.634613 kubelet[2466]: I0910 00:21:39.633660 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-844fff9f5c-zklz7" podStartSLOduration=24.745354617 podStartE2EDuration="31.633639736s" podCreationTimestamp="2025-09-10 00:21:08 +0000 UTC" firstStartedPulling="2025-09-10 00:21:31.875982182 +0000 UTC m=+42.574102228" lastFinishedPulling="2025-09-10 00:21:38.764267301 +0000 UTC m=+49.462387347" observedRunningTime="2025-09-10 00:21:39.633018199 +0000 UTC m=+50.331138245" watchObservedRunningTime="2025-09-10 00:21:39.633639736 +0000 UTC m=+50.331759742" Sep 10 00:21:40.173672 containerd[1438]: time="2025-09-10T00:21:40.173508337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:40.174483 containerd[1438]: time="2025-09-10T00:21:40.174446283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 10 00:21:40.179913 containerd[1438]: time="2025-09-10T00:21:40.179864556Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:40.182838 containerd[1438]: time="2025-09-10T00:21:40.182785718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:21:40.183656 containerd[1438]: time="2025-09-10T00:21:40.183499898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", 
size \"15130401\" in 1.417841797s" Sep 10 00:21:40.183656 containerd[1438]: time="2025-09-10T00:21:40.183532579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 10 00:21:40.188801 containerd[1438]: time="2025-09-10T00:21:40.188651283Z" level=info msg="CreateContainer within sandbox \"f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 10 00:21:40.203556 containerd[1438]: time="2025-09-10T00:21:40.203421739Z" level=info msg="CreateContainer within sandbox \"f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"349bf92bd4a50f1cb550d77b2f3cd6b3b998d8d3628d597a7024f8b8309e3348\"" Sep 10 00:21:40.205335 containerd[1438]: time="2025-09-10T00:21:40.205226190Z" level=info msg="StartContainer for \"349bf92bd4a50f1cb550d77b2f3cd6b3b998d8d3628d597a7024f8b8309e3348\"" Sep 10 00:21:40.245298 systemd[1]: Started cri-containerd-349bf92bd4a50f1cb550d77b2f3cd6b3b998d8d3628d597a7024f8b8309e3348.scope - libcontainer container 349bf92bd4a50f1cb550d77b2f3cd6b3b998d8d3628d597a7024f8b8309e3348. Sep 10 00:21:40.272988 containerd[1438]: time="2025-09-10T00:21:40.272910857Z" level=info msg="StartContainer for \"349bf92bd4a50f1cb550d77b2f3cd6b3b998d8d3628d597a7024f8b8309e3348\" returns successfully" Sep 10 00:21:40.454681 kubelet[2466]: I0910 00:21:40.454566 2466 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 10 00:21:40.459034 kubelet[2466]: I0910 00:21:40.458996 2466 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 10 00:21:40.566909 systemd[1]: Started sshd@9-10.0.0.141:22-10.0.0.1:53962.service - OpenSSH per-connection server daemon (10.0.0.1:53962). Sep 10 00:21:40.615697 sshd[5460]: Accepted publickey for core from 10.0.0.1 port 53962 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:21:40.617488 sshd[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:21:40.621576 systemd-logind[1422]: New session 10 of user core. Sep 10 00:21:40.627275 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 10 00:21:40.637261 kubelet[2466]: I0910 00:21:40.637154 2466 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2dhv8" podStartSLOduration=22.551418227 podStartE2EDuration="32.637113876s" podCreationTimestamp="2025-09-10 00:21:08 +0000 UTC" firstStartedPulling="2025-09-10 00:21:30.0996075 +0000 UTC m=+40.797727546" lastFinishedPulling="2025-09-10 00:21:40.185303189 +0000 UTC m=+50.883423195" observedRunningTime="2025-09-10 00:21:40.636727745 +0000 UTC m=+51.334847791" watchObservedRunningTime="2025-09-10 00:21:40.637113876 +0000 UTC m=+51.335233922" Sep 10 00:21:41.051904 sshd[5460]: pam_unix(sshd:session): session closed for user core Sep 10 00:21:41.069987 systemd[1]: sshd@9-10.0.0.141:22-10.0.0.1:53962.service: Deactivated successfully. Sep 10 00:21:41.073445 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 00:21:41.074993 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit. 
Sep 10 00:21:41.086655 systemd[1]: Started sshd@10-10.0.0.141:22-10.0.0.1:53968.service - OpenSSH per-connection server daemon (10.0.0.1:53968). Sep 10 00:21:41.087744 systemd-logind[1422]: Removed session 10. Sep 10 00:21:41.121454 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 53968 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:21:41.122970 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:21:41.127141 systemd-logind[1422]: New session 11 of user core. Sep 10 00:21:41.135299 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 00:21:41.349192 sshd[5475]: pam_unix(sshd:session): session closed for user core Sep 10 00:21:41.359757 systemd[1]: sshd@10-10.0.0.141:22-10.0.0.1:53968.service: Deactivated successfully. Sep 10 00:21:41.364542 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 00:21:41.370270 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit. Sep 10 00:21:41.375437 systemd[1]: Started sshd@11-10.0.0.141:22-10.0.0.1:53978.service - OpenSSH per-connection server daemon (10.0.0.1:53978). Sep 10 00:21:41.378876 systemd-logind[1422]: Removed session 11. Sep 10 00:21:41.414257 sshd[5488]: Accepted publickey for core from 10.0.0.1 port 53978 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:21:41.415809 sshd[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:21:41.419720 systemd-logind[1422]: New session 12 of user core. Sep 10 00:21:41.430234 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 10 00:21:41.567173 sshd[5488]: pam_unix(sshd:session): session closed for user core Sep 10 00:21:41.570736 systemd[1]: sshd@11-10.0.0.141:22-10.0.0.1:53978.service: Deactivated successfully. Sep 10 00:21:41.574491 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 00:21:41.575636 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit. Sep 10 00:21:41.576577 systemd-logind[1422]: Removed session 12. Sep 10 00:21:42.548095 kubelet[2466]: I0910 00:21:42.547810 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:21:46.577726 systemd[1]: Started sshd@12-10.0.0.141:22-10.0.0.1:53984.service - OpenSSH per-connection server daemon (10.0.0.1:53984). Sep 10 00:21:46.616969 sshd[5512]: Accepted publickey for core from 10.0.0.1 port 53984 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:21:46.618197 sshd[5512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:21:46.621485 systemd-logind[1422]: New session 13 of user core. Sep 10 00:21:46.633169 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 00:21:46.778713 sshd[5512]: pam_unix(sshd:session): session closed for user core Sep 10 00:21:46.790041 systemd[1]: sshd@12-10.0.0.141:22-10.0.0.1:53984.service: Deactivated successfully. Sep 10 00:21:46.791842 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 00:21:46.794801 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit. Sep 10 00:21:46.800728 systemd[1]: Started sshd@13-10.0.0.141:22-10.0.0.1:53992.service - OpenSSH per-connection server daemon (10.0.0.1:53992). Sep 10 00:21:46.803502 systemd-logind[1422]: Removed session 13. 
Sep 10 00:21:46.840455 sshd[5526]: Accepted publickey for core from 10.0.0.1 port 53992 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:21:46.843322 sshd[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:21:46.849164 systemd-logind[1422]: New session 14 of user core. Sep 10 00:21:46.854187 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 00:21:47.054113 sshd[5526]: pam_unix(sshd:session): session closed for user core Sep 10 00:21:47.069668 systemd[1]: sshd@13-10.0.0.141:22-10.0.0.1:53992.service: Deactivated successfully. Sep 10 00:21:47.071431 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 00:21:47.074159 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit. Sep 10 00:21:47.075290 systemd[1]: Started sshd@14-10.0.0.141:22-10.0.0.1:54004.service - OpenSSH per-connection server daemon (10.0.0.1:54004). Sep 10 00:21:47.076033 systemd-logind[1422]: Removed session 14. Sep 10 00:21:47.115193 sshd[5538]: Accepted publickey for core from 10.0.0.1 port 54004 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:21:47.116674 sshd[5538]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:21:47.120568 systemd-logind[1422]: New session 15 of user core. Sep 10 00:21:47.129212 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 10 00:21:47.742970 sshd[5538]: pam_unix(sshd:session): session closed for user core Sep 10 00:21:47.753882 systemd[1]: sshd@14-10.0.0.141:22-10.0.0.1:54004.service: Deactivated successfully. Sep 10 00:21:47.756505 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 00:21:47.758855 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit. Sep 10 00:21:47.767724 systemd[1]: Started sshd@15-10.0.0.141:22-10.0.0.1:54020.service - OpenSSH per-connection server daemon (10.0.0.1:54020). Sep 10 00:21:47.769443 systemd-logind[1422]: Removed session 15. Sep 10 00:21:47.807583 sshd[5560]: Accepted publickey for core from 10.0.0.1 port 54020 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:21:47.809011 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:21:47.813139 systemd-logind[1422]: New session 16 of user core. Sep 10 00:21:47.822246 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 10 00:21:48.222890 sshd[5560]: pam_unix(sshd:session): session closed for user core Sep 10 00:21:48.234131 systemd[1]: sshd@15-10.0.0.141:22-10.0.0.1:54020.service: Deactivated successfully. Sep 10 00:21:48.236404 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 00:21:48.240217 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit. Sep 10 00:21:48.249373 systemd[1]: Started sshd@16-10.0.0.141:22-10.0.0.1:54036.service - OpenSSH per-connection server daemon (10.0.0.1:54036). Sep 10 00:21:48.250488 systemd-logind[1422]: Removed session 16. Sep 10 00:21:48.284801 sshd[5574]: Accepted publickey for core from 10.0.0.1 port 54036 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:21:48.286696 sshd[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:21:48.290446 systemd-logind[1422]: New session 17 of user core. Sep 10 00:21:48.300242 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 10 00:21:48.434790 sshd[5574]: pam_unix(sshd:session): session closed for user core Sep 10 00:21:48.437943 systemd[1]: sshd@16-10.0.0.141:22-10.0.0.1:54036.service: Deactivated successfully. Sep 10 00:21:48.440364 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 00:21:48.441925 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit. Sep 10 00:21:48.443088 systemd-logind[1422]: Removed session 17. Sep 10 00:21:49.361362 containerd[1438]: time="2025-09-10T00:21:49.361312999Z" level=info msg="StopPodSandbox for \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\"" Sep 10 00:21:49.465426 containerd[1438]: 2025-09-10 00:21:49.415 [WARNING][5597] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2dhv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0d446ad-5074-4566-b66e-22a6ab7ca731", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e", Pod:"csi-node-driver-2dhv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9806c04a82e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:49.465426 containerd[1438]: 2025-09-10 00:21:49.416 [INFO][5597] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:49.465426 containerd[1438]: 2025-09-10 00:21:49.416 [INFO][5597] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" iface="eth0" netns="" Sep 10 00:21:49.465426 containerd[1438]: 2025-09-10 00:21:49.416 [INFO][5597] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:49.465426 containerd[1438]: 2025-09-10 00:21:49.416 [INFO][5597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:49.465426 containerd[1438]: 2025-09-10 00:21:49.449 [INFO][5607] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" HandleID="k8s-pod-network.9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Workload="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:49.465426 containerd[1438]: 2025-09-10 00:21:49.449 [INFO][5607] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:49.465426 containerd[1438]: 2025-09-10 00:21:49.449 [INFO][5607] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:49.465426 containerd[1438]: 2025-09-10 00:21:49.459 [WARNING][5607] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" HandleID="k8s-pod-network.9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Workload="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:49.465426 containerd[1438]: 2025-09-10 00:21:49.459 [INFO][5607] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" HandleID="k8s-pod-network.9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Workload="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:49.465426 containerd[1438]: 2025-09-10 00:21:49.461 [INFO][5607] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:49.465426 containerd[1438]: 2025-09-10 00:21:49.463 [INFO][5597] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:49.465906 containerd[1438]: time="2025-09-10T00:21:49.465436453Z" level=info msg="TearDown network for sandbox \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\" successfully" Sep 10 00:21:49.465906 containerd[1438]: time="2025-09-10T00:21:49.465463894Z" level=info msg="StopPodSandbox for \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\" returns successfully" Sep 10 00:21:49.466130 containerd[1438]: time="2025-09-10T00:21:49.466041108Z" level=info msg="RemovePodSandbox for \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\"" Sep 10 00:21:49.471339 containerd[1438]: time="2025-09-10T00:21:49.471285998Z" level=info msg="Forcibly stopping sandbox \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\"" Sep 10 00:21:49.535476 containerd[1438]: 2025-09-10 00:21:49.503 [WARNING][5625] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2dhv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c0d446ad-5074-4566-b66e-22a6ab7ca731", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f1a23f462caa6ad5025b0a41bfa51fd24cf8c1a66837044bda17546f3fa74d9e", Pod:"csi-node-driver-2dhv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9806c04a82e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:49.535476 containerd[1438]: 2025-09-10 00:21:49.503 [INFO][5625] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:49.535476 containerd[1438]: 2025-09-10 00:21:49.503 [INFO][5625] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" iface="eth0" netns="" Sep 10 00:21:49.535476 containerd[1438]: 2025-09-10 00:21:49.503 [INFO][5625] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:49.535476 containerd[1438]: 2025-09-10 00:21:49.503 [INFO][5625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:49.535476 containerd[1438]: 2025-09-10 00:21:49.521 [INFO][5633] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" HandleID="k8s-pod-network.9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Workload="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:49.535476 containerd[1438]: 2025-09-10 00:21:49.521 [INFO][5633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:49.535476 containerd[1438]: 2025-09-10 00:21:49.521 [INFO][5633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:49.535476 containerd[1438]: 2025-09-10 00:21:49.530 [WARNING][5633] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" HandleID="k8s-pod-network.9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Workload="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:49.535476 containerd[1438]: 2025-09-10 00:21:49.530 [INFO][5633] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" HandleID="k8s-pod-network.9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Workload="localhost-k8s-csi--node--driver--2dhv8-eth0" Sep 10 00:21:49.535476 containerd[1438]: 2025-09-10 00:21:49.531 [INFO][5633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:49.535476 containerd[1438]: 2025-09-10 00:21:49.533 [INFO][5625] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7" Sep 10 00:21:49.535893 containerd[1438]: time="2025-09-10T00:21:49.535518465Z" level=info msg="TearDown network for sandbox \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\" successfully" Sep 10 00:21:49.554815 containerd[1438]: time="2025-09-10T00:21:49.554755501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:21:49.554910 containerd[1438]: time="2025-09-10T00:21:49.554849103Z" level=info msg="RemovePodSandbox \"9c3a4f8d38b160880187f29d64e32906ea5c974ebe91d355e250729cf6fd07c7\" returns successfully" Sep 10 00:21:49.555344 containerd[1438]: time="2025-09-10T00:21:49.555325155Z" level=info msg="StopPodSandbox for \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\"" Sep 10 00:21:49.617498 containerd[1438]: 2025-09-10 00:21:49.586 [WARNING][5651] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0", GenerateName:"calico-kube-controllers-844fff9f5c-", Namespace:"calico-system", SelfLink:"", UID:"85708529-bae5-41c2-aae8-a8dcb5a3de9c", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"844fff9f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961", Pod:"calico-kube-controllers-844fff9f5c-zklz7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali304f352a12a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:49.617498 containerd[1438]: 2025-09-10 00:21:49.586 [INFO][5651] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Sep 10 00:21:49.617498 containerd[1438]: 2025-09-10 00:21:49.586 [INFO][5651] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" iface="eth0" netns="" Sep 10 00:21:49.617498 containerd[1438]: 2025-09-10 00:21:49.586 [INFO][5651] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Sep 10 00:21:49.617498 containerd[1438]: 2025-09-10 00:21:49.586 [INFO][5651] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Sep 10 00:21:49.617498 containerd[1438]: 2025-09-10 00:21:49.604 [INFO][5660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" HandleID="k8s-pod-network.7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Workload="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0" Sep 10 00:21:49.617498 containerd[1438]: 2025-09-10 00:21:49.604 [INFO][5660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:49.617498 containerd[1438]: 2025-09-10 00:21:49.604 [INFO][5660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:49.617498 containerd[1438]: 2025-09-10 00:21:49.612 [WARNING][5660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" HandleID="k8s-pod-network.7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Workload="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0" Sep 10 00:21:49.617498 containerd[1438]: 2025-09-10 00:21:49.612 [INFO][5660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" HandleID="k8s-pod-network.7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Workload="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0" Sep 10 00:21:49.617498 containerd[1438]: 2025-09-10 00:21:49.614 [INFO][5660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:49.617498 containerd[1438]: 2025-09-10 00:21:49.615 [INFO][5651] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Sep 10 00:21:49.617978 containerd[1438]: time="2025-09-10T00:21:49.617537893Z" level=info msg="TearDown network for sandbox \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\" successfully" Sep 10 00:21:49.617978 containerd[1438]: time="2025-09-10T00:21:49.617566733Z" level=info msg="StopPodSandbox for \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\" returns successfully" Sep 10 00:21:49.618078 containerd[1438]: time="2025-09-10T00:21:49.618040785Z" level=info msg="RemovePodSandbox for \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\"" Sep 10 00:21:49.618114 containerd[1438]: time="2025-09-10T00:21:49.618086706Z" level=info msg="Forcibly stopping sandbox \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\"" Sep 10 00:21:49.686353 containerd[1438]: 2025-09-10 00:21:49.655 [WARNING][5678] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0", GenerateName:"calico-kube-controllers-844fff9f5c-", Namespace:"calico-system", SelfLink:"", UID:"85708529-bae5-41c2-aae8-a8dcb5a3de9c", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"844fff9f5c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"29229c00cb85649cc2e5e498546d1cf3c005c172309400cfa1a979f120035961", Pod:"calico-kube-controllers-844fff9f5c-zklz7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali304f352a12a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:49.686353 containerd[1438]: 2025-09-10 00:21:49.655 [INFO][5678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Sep 10 00:21:49.686353 containerd[1438]: 2025-09-10 00:21:49.655 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" iface="eth0" netns="" Sep 10 00:21:49.686353 containerd[1438]: 2025-09-10 00:21:49.655 [INFO][5678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Sep 10 00:21:49.686353 containerd[1438]: 2025-09-10 00:21:49.655 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Sep 10 00:21:49.686353 containerd[1438]: 2025-09-10 00:21:49.672 [INFO][5687] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" HandleID="k8s-pod-network.7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Workload="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0" Sep 10 00:21:49.686353 containerd[1438]: 2025-09-10 00:21:49.672 [INFO][5687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:49.686353 containerd[1438]: 2025-09-10 00:21:49.673 [INFO][5687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:49.686353 containerd[1438]: 2025-09-10 00:21:49.680 [WARNING][5687] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" HandleID="k8s-pod-network.7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Workload="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0" Sep 10 00:21:49.686353 containerd[1438]: 2025-09-10 00:21:49.680 [INFO][5687] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" HandleID="k8s-pod-network.7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Workload="localhost-k8s-calico--kube--controllers--844fff9f5c--zklz7-eth0" Sep 10 00:21:49.686353 containerd[1438]: 2025-09-10 00:21:49.682 [INFO][5687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:49.686353 containerd[1438]: 2025-09-10 00:21:49.684 [INFO][5678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b" Sep 10 00:21:49.686797 containerd[1438]: time="2025-09-10T00:21:49.686385595Z" level=info msg="TearDown network for sandbox \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\" successfully" Sep 10 00:21:49.689172 containerd[1438]: time="2025-09-10T00:21:49.689139863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:21:49.689219 containerd[1438]: time="2025-09-10T00:21:49.689199744Z" level=info msg="RemovePodSandbox \"7b3783705ae30f3f7af826b35f687440556cccc4cfba9da0c79498192cc4c27b\" returns successfully" Sep 10 00:21:49.689646 containerd[1438]: time="2025-09-10T00:21:49.689589994Z" level=info msg="StopPodSandbox for \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\"" Sep 10 00:21:49.751096 containerd[1438]: 2025-09-10 00:21:49.719 [WARNING][5707] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--x5mwf-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"375bc444-bbf5-4839-b395-b0f406ed06db", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f", Pod:"goldmane-54d579b49d-x5mwf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b28740e800", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:49.751096 containerd[1438]: 2025-09-10 00:21:49.719 [INFO][5707] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:49.751096 containerd[1438]: 2025-09-10 00:21:49.719 [INFO][5707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" iface="eth0" netns="" Sep 10 00:21:49.751096 containerd[1438]: 2025-09-10 00:21:49.719 [INFO][5707] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:49.751096 containerd[1438]: 2025-09-10 00:21:49.719 [INFO][5707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:49.751096 containerd[1438]: 2025-09-10 00:21:49.736 [INFO][5715] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" HandleID="k8s-pod-network.e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Workload="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:49.751096 containerd[1438]: 2025-09-10 00:21:49.737 [INFO][5715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:49.751096 containerd[1438]: 2025-09-10 00:21:49.737 [INFO][5715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:49.751096 containerd[1438]: 2025-09-10 00:21:49.746 [WARNING][5715] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" HandleID="k8s-pod-network.e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Workload="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:49.751096 containerd[1438]: 2025-09-10 00:21:49.746 [INFO][5715] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" HandleID="k8s-pod-network.e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Workload="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:49.751096 containerd[1438]: 2025-09-10 00:21:49.747 [INFO][5715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:49.751096 containerd[1438]: 2025-09-10 00:21:49.749 [INFO][5707] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:49.751545 containerd[1438]: time="2025-09-10T00:21:49.751129915Z" level=info msg="TearDown network for sandbox \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\" successfully" Sep 10 00:21:49.751545 containerd[1438]: time="2025-09-10T00:21:49.751154076Z" level=info msg="StopPodSandbox for \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\" returns successfully" Sep 10 00:21:49.751985 containerd[1438]: time="2025-09-10T00:21:49.751958575Z" level=info msg="RemovePodSandbox for \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\"" Sep 10 00:21:49.752040 containerd[1438]: time="2025-09-10T00:21:49.751993056Z" level=info msg="Forcibly stopping sandbox \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\"" Sep 10 00:21:49.813430 containerd[1438]: 2025-09-10 00:21:49.782 [WARNING][5733] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--x5mwf-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"375bc444-bbf5-4839-b395-b0f406ed06db", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b2edb81a6a2682586bb3f06ac552ac5d8f548d1fed56326dbc51eefa5de8104f", Pod:"goldmane-54d579b49d-x5mwf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5b28740e800", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:49.813430 containerd[1438]: 2025-09-10 00:21:49.782 [INFO][5733] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:49.813430 containerd[1438]: 2025-09-10 00:21:49.782 [INFO][5733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" iface="eth0" netns="" Sep 10 00:21:49.813430 containerd[1438]: 2025-09-10 00:21:49.782 [INFO][5733] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:49.813430 containerd[1438]: 2025-09-10 00:21:49.782 [INFO][5733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:49.813430 containerd[1438]: 2025-09-10 00:21:49.800 [INFO][5743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" HandleID="k8s-pod-network.e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Workload="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:49.813430 containerd[1438]: 2025-09-10 00:21:49.800 [INFO][5743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:49.813430 containerd[1438]: 2025-09-10 00:21:49.800 [INFO][5743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:49.813430 containerd[1438]: 2025-09-10 00:21:49.808 [WARNING][5743] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" HandleID="k8s-pod-network.e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Workload="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:49.813430 containerd[1438]: 2025-09-10 00:21:49.808 [INFO][5743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" HandleID="k8s-pod-network.e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Workload="localhost-k8s-goldmane--54d579b49d--x5mwf-eth0" Sep 10 00:21:49.813430 containerd[1438]: 2025-09-10 00:21:49.809 [INFO][5743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:49.813430 containerd[1438]: 2025-09-10 00:21:49.811 [INFO][5733] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1" Sep 10 00:21:49.814998 containerd[1438]: time="2025-09-10T00:21:49.813500617Z" level=info msg="TearDown network for sandbox \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\" successfully" Sep 10 00:21:49.819210 containerd[1438]: time="2025-09-10T00:21:49.819178237Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:21:49.819353 containerd[1438]: time="2025-09-10T00:21:49.819336561Z" level=info msg="RemovePodSandbox \"e84050e1615d173139a9ebae3bd4364b1e58c1ddca67c68532cfc08e170310a1\" returns successfully" Sep 10 00:21:49.819875 containerd[1438]: time="2025-09-10T00:21:49.819848734Z" level=info msg="StopPodSandbox for \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\"" Sep 10 00:21:49.879933 containerd[1438]: 2025-09-10 00:21:49.849 [WARNING][5762] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e347e663-fe05-4ba7-b6ca-d630372bf51c", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47", Pod:"coredns-668d6bf9bc-tlwmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia928239cd82", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:49.879933 containerd[1438]: 2025-09-10 00:21:49.849 [INFO][5762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:49.879933 containerd[1438]: 2025-09-10 00:21:49.849 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" iface="eth0" netns="" Sep 10 00:21:49.879933 containerd[1438]: 2025-09-10 00:21:49.849 [INFO][5762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:49.879933 containerd[1438]: 2025-09-10 00:21:49.849 [INFO][5762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:49.879933 containerd[1438]: 2025-09-10 00:21:49.866 [INFO][5770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" HandleID="k8s-pod-network.9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Workload="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" Sep 10 00:21:49.879933 containerd[1438]: 2025-09-10 00:21:49.867 [INFO][5770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:49.879933 containerd[1438]: 2025-09-10 00:21:49.867 [INFO][5770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:21:49.879933 containerd[1438]: 2025-09-10 00:21:49.875 [WARNING][5770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" HandleID="k8s-pod-network.9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Workload="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" Sep 10 00:21:49.879933 containerd[1438]: 2025-09-10 00:21:49.875 [INFO][5770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" HandleID="k8s-pod-network.9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Workload="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" Sep 10 00:21:49.879933 containerd[1438]: 2025-09-10 00:21:49.876 [INFO][5770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:49.879933 containerd[1438]: 2025-09-10 00:21:49.878 [INFO][5762] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:49.879933 containerd[1438]: time="2025-09-10T00:21:49.879913778Z" level=info msg="TearDown network for sandbox \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\" successfully" Sep 10 00:21:49.880538 containerd[1438]: time="2025-09-10T00:21:49.879937939Z" level=info msg="StopPodSandbox for \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\" returns successfully" Sep 10 00:21:49.881652 containerd[1438]: time="2025-09-10T00:21:49.881614620Z" level=info msg="RemovePodSandbox for \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\"" Sep 10 00:21:49.881704 containerd[1438]: time="2025-09-10T00:21:49.881655661Z" level=info msg="Forcibly stopping sandbox \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\"" Sep 10 00:21:49.943244 containerd[1438]: 2025-09-10 00:21:49.912 [WARNING][5788] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e347e663-fe05-4ba7-b6ca-d630372bf51c", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 20, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49e306579a119e0c061f3154bd757c222ca8491f19cdbc8cf715493085a0eb47", Pod:"coredns-668d6bf9bc-tlwmv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia928239cd82", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:49.943244 containerd[1438]: 2025-09-10 00:21:49.913 [INFO][5788] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:49.943244 containerd[1438]: 2025-09-10 00:21:49.913 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" iface="eth0" netns="" Sep 10 00:21:49.943244 containerd[1438]: 2025-09-10 00:21:49.913 [INFO][5788] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:49.943244 containerd[1438]: 2025-09-10 00:21:49.913 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:49.943244 containerd[1438]: 2025-09-10 00:21:49.929 [INFO][5797] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" HandleID="k8s-pod-network.9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Workload="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" Sep 10 00:21:49.943244 containerd[1438]: 2025-09-10 00:21:49.929 [INFO][5797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:49.943244 containerd[1438]: 2025-09-10 00:21:49.929 [INFO][5797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:21:49.943244 containerd[1438]: 2025-09-10 00:21:49.938 [WARNING][5797] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" HandleID="k8s-pod-network.9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Workload="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" Sep 10 00:21:49.943244 containerd[1438]: 2025-09-10 00:21:49.938 [INFO][5797] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" HandleID="k8s-pod-network.9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Workload="localhost-k8s-coredns--668d6bf9bc--tlwmv-eth0" Sep 10 00:21:49.943244 containerd[1438]: 2025-09-10 00:21:49.940 [INFO][5797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:49.943244 containerd[1438]: 2025-09-10 00:21:49.941 [INFO][5788] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d" Sep 10 00:21:49.943783 containerd[1438]: time="2025-09-10T00:21:49.943283785Z" level=info msg="TearDown network for sandbox \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\" successfully" Sep 10 00:21:49.945964 containerd[1438]: time="2025-09-10T00:21:49.945936130Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:21:49.946026 containerd[1438]: time="2025-09-10T00:21:49.945998772Z" level=info msg="RemovePodSandbox \"9d7c2bbd76a7baf178a1ef96836083d9d02d06a67011e8d686cbc0b70914da6d\" returns successfully" Sep 10 00:21:49.946447 containerd[1438]: time="2025-09-10T00:21:49.946426702Z" level=info msg="StopPodSandbox for \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\"" Sep 10 00:21:50.006650 containerd[1438]: 2025-09-10 00:21:49.977 [WARNING][5814] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0", GenerateName:"calico-apiserver-9b45cc59c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f46fd77-2fed-494f-b2c2-daf32e135470", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b45cc59c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a", Pod:"calico-apiserver-9b45cc59c-mrqjs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1bbc7ea5a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:50.006650 containerd[1438]: 2025-09-10 00:21:49.977 [INFO][5814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:50.006650 containerd[1438]: 2025-09-10 00:21:49.977 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" iface="eth0" netns="" Sep 10 00:21:50.006650 containerd[1438]: 2025-09-10 00:21:49.977 [INFO][5814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:50.006650 containerd[1438]: 2025-09-10 00:21:49.977 [INFO][5814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:50.006650 containerd[1438]: 2025-09-10 00:21:49.993 [INFO][5824] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" HandleID="k8s-pod-network.f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Workload="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:50.006650 containerd[1438]: 2025-09-10 00:21:49.994 [INFO][5824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:50.006650 containerd[1438]: 2025-09-10 00:21:49.994 [INFO][5824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:50.006650 containerd[1438]: 2025-09-10 00:21:50.002 [WARNING][5824] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" HandleID="k8s-pod-network.f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Workload="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:50.006650 containerd[1438]: 2025-09-10 00:21:50.002 [INFO][5824] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" HandleID="k8s-pod-network.f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Workload="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:50.006650 containerd[1438]: 2025-09-10 00:21:50.003 [INFO][5824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:50.006650 containerd[1438]: 2025-09-10 00:21:50.004 [INFO][5814] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:50.007073 containerd[1438]: time="2025-09-10T00:21:50.006688430Z" level=info msg="TearDown network for sandbox \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\" successfully" Sep 10 00:21:50.007073 containerd[1438]: time="2025-09-10T00:21:50.006711791Z" level=info msg="StopPodSandbox for \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\" returns successfully" Sep 10 00:21:50.007122 containerd[1438]: time="2025-09-10T00:21:50.007107601Z" level=info msg="RemovePodSandbox for \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\"" Sep 10 00:21:50.007147 containerd[1438]: time="2025-09-10T00:21:50.007135881Z" level=info msg="Forcibly stopping sandbox \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\"" Sep 10 00:21:50.072862 containerd[1438]: 2025-09-10 00:21:50.043 [WARNING][5842] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0", GenerateName:"calico-apiserver-9b45cc59c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f46fd77-2fed-494f-b2c2-daf32e135470", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b45cc59c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9cdd8f6946ad7037d6fb5d9994e99bf3f82d62c1f173d6500190d1ba13c8ec5a", Pod:"calico-apiserver-9b45cc59c-mrqjs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie1bbc7ea5a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:50.072862 containerd[1438]: 2025-09-10 00:21:50.043 [INFO][5842] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:50.072862 containerd[1438]: 2025-09-10 00:21:50.043 [INFO][5842] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" iface="eth0" netns="" Sep 10 00:21:50.072862 containerd[1438]: 2025-09-10 00:21:50.043 [INFO][5842] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:50.072862 containerd[1438]: 2025-09-10 00:21:50.043 [INFO][5842] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:50.072862 containerd[1438]: 2025-09-10 00:21:50.060 [INFO][5851] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" HandleID="k8s-pod-network.f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Workload="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:50.072862 containerd[1438]: 2025-09-10 00:21:50.060 [INFO][5851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:50.072862 containerd[1438]: 2025-09-10 00:21:50.060 [INFO][5851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:50.072862 containerd[1438]: 2025-09-10 00:21:50.068 [WARNING][5851] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" HandleID="k8s-pod-network.f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Workload="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:50.072862 containerd[1438]: 2025-09-10 00:21:50.068 [INFO][5851] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" HandleID="k8s-pod-network.f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Workload="localhost-k8s-calico--apiserver--9b45cc59c--mrqjs-eth0" Sep 10 00:21:50.072862 containerd[1438]: 2025-09-10 00:21:50.069 [INFO][5851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:50.072862 containerd[1438]: 2025-09-10 00:21:50.070 [INFO][5842] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793" Sep 10 00:21:50.073262 containerd[1438]: time="2025-09-10T00:21:50.072919769Z" level=info msg="TearDown network for sandbox \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\" successfully" Sep 10 00:21:50.075802 containerd[1438]: time="2025-09-10T00:21:50.075769319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:21:50.075846 containerd[1438]: time="2025-09-10T00:21:50.075836681Z" level=info msg="RemovePodSandbox \"f7158f2c9eb24446b1c4e733c6d6d9be77ba27d48d614a1a983d543d05c07793\" returns successfully" Sep 10 00:21:50.076393 containerd[1438]: time="2025-09-10T00:21:50.076360414Z" level=info msg="StopPodSandbox for \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\"" Sep 10 00:21:50.137697 containerd[1438]: 2025-09-10 00:21:50.106 [WARNING][5868] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0", GenerateName:"calico-apiserver-9b45cc59c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e6abdb0-8540-4e43-a17d-9abbd4d6ede7", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b45cc59c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb", Pod:"calico-apiserver-9b45cc59c-f9r2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29e85dc0d14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:50.137697 containerd[1438]: 2025-09-10 00:21:50.106 [INFO][5868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Sep 10 00:21:50.137697 containerd[1438]: 2025-09-10 00:21:50.106 [INFO][5868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" iface="eth0" netns="" Sep 10 00:21:50.137697 containerd[1438]: 2025-09-10 00:21:50.106 [INFO][5868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Sep 10 00:21:50.137697 containerd[1438]: 2025-09-10 00:21:50.106 [INFO][5868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Sep 10 00:21:50.137697 containerd[1438]: 2025-09-10 00:21:50.123 [INFO][5877] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" HandleID="k8s-pod-network.3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Workload="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0" Sep 10 00:21:50.137697 containerd[1438]: 2025-09-10 00:21:50.123 [INFO][5877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:50.137697 containerd[1438]: 2025-09-10 00:21:50.123 [INFO][5877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:50.137697 containerd[1438]: 2025-09-10 00:21:50.133 [WARNING][5877] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" HandleID="k8s-pod-network.3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Workload="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0" Sep 10 00:21:50.137697 containerd[1438]: 2025-09-10 00:21:50.133 [INFO][5877] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" HandleID="k8s-pod-network.3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Workload="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0" Sep 10 00:21:50.137697 containerd[1438]: 2025-09-10 00:21:50.134 [INFO][5877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:50.137697 containerd[1438]: 2025-09-10 00:21:50.136 [INFO][5868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Sep 10 00:21:50.137697 containerd[1438]: time="2025-09-10T00:21:50.137677752Z" level=info msg="TearDown network for sandbox \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\" successfully" Sep 10 00:21:50.138153 containerd[1438]: time="2025-09-10T00:21:50.137702113Z" level=info msg="StopPodSandbox for \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\" returns successfully" Sep 10 00:21:50.138153 containerd[1438]: time="2025-09-10T00:21:50.138116563Z" level=info msg="RemovePodSandbox for \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\"" Sep 10 00:21:50.138153 containerd[1438]: time="2025-09-10T00:21:50.138143444Z" level=info msg="Forcibly stopping sandbox \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\"" Sep 10 00:21:50.197541 containerd[1438]: 2025-09-10 00:21:50.168 [WARNING][5895] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0", GenerateName:"calico-apiserver-9b45cc59c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4e6abdb0-8540-4e43-a17d-9abbd4d6ede7", ResourceVersion:"1160", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 21, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b45cc59c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1be704883440d1fb8f038909295165d3a3a95965068db4cb8b49043ef8574fb", Pod:"calico-apiserver-9b45cc59c-f9r2t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29e85dc0d14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:50.197541 containerd[1438]: 2025-09-10 00:21:50.168 [INFO][5895] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Sep 10 00:21:50.197541 containerd[1438]: 2025-09-10 00:21:50.168 [INFO][5895] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" iface="eth0" netns="" Sep 10 00:21:50.197541 containerd[1438]: 2025-09-10 00:21:50.168 [INFO][5895] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Sep 10 00:21:50.197541 containerd[1438]: 2025-09-10 00:21:50.168 [INFO][5895] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Sep 10 00:21:50.197541 containerd[1438]: 2025-09-10 00:21:50.185 [INFO][5904] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" HandleID="k8s-pod-network.3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Workload="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0" Sep 10 00:21:50.197541 containerd[1438]: 2025-09-10 00:21:50.185 [INFO][5904] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:50.197541 containerd[1438]: 2025-09-10 00:21:50.185 [INFO][5904] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:50.197541 containerd[1438]: 2025-09-10 00:21:50.193 [WARNING][5904] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" HandleID="k8s-pod-network.3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Workload="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0" Sep 10 00:21:50.197541 containerd[1438]: 2025-09-10 00:21:50.193 [INFO][5904] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" HandleID="k8s-pod-network.3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Workload="localhost-k8s-calico--apiserver--9b45cc59c--f9r2t-eth0" Sep 10 00:21:50.197541 containerd[1438]: 2025-09-10 00:21:50.194 [INFO][5904] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:50.197541 containerd[1438]: 2025-09-10 00:21:50.195 [INFO][5895] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961" Sep 10 00:21:50.197928 containerd[1438]: time="2025-09-10T00:21:50.197571336Z" level=info msg="TearDown network for sandbox \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\" successfully" Sep 10 00:21:50.200378 containerd[1438]: time="2025-09-10T00:21:50.200348924Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:21:50.200435 containerd[1438]: time="2025-09-10T00:21:50.200411486Z" level=info msg="RemovePodSandbox \"3415a522bf403f7b623e432ade6908428cf7284c63dd50d707eb75500b1d8961\" returns successfully" Sep 10 00:21:50.201016 containerd[1438]: time="2025-09-10T00:21:50.200992420Z" level=info msg="StopPodSandbox for \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\"" Sep 10 00:21:50.262750 containerd[1438]: 2025-09-10 00:21:50.230 [WARNING][5922] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bbdc19e7-71b6-4a83-8e74-efca458fc0cb", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 20, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410", Pod:"coredns-668d6bf9bc-4mrqt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia79fdaf6e14", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:50.262750 containerd[1438]: 2025-09-10 00:21:50.230 [INFO][5922] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:50.262750 containerd[1438]: 2025-09-10 00:21:50.230 [INFO][5922] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" iface="eth0" netns="" Sep 10 00:21:50.262750 containerd[1438]: 2025-09-10 00:21:50.230 [INFO][5922] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:50.262750 containerd[1438]: 2025-09-10 00:21:50.230 [INFO][5922] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:50.262750 containerd[1438]: 2025-09-10 00:21:50.249 [INFO][5930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" HandleID="k8s-pod-network.2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Workload="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:50.262750 containerd[1438]: 2025-09-10 00:21:50.249 [INFO][5930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:50.262750 containerd[1438]: 2025-09-10 00:21:50.249 [INFO][5930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:21:50.262750 containerd[1438]: 2025-09-10 00:21:50.258 [WARNING][5930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" HandleID="k8s-pod-network.2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Workload="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:50.262750 containerd[1438]: 2025-09-10 00:21:50.258 [INFO][5930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" HandleID="k8s-pod-network.2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Workload="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:50.262750 containerd[1438]: 2025-09-10 00:21:50.259 [INFO][5930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:50.262750 containerd[1438]: 2025-09-10 00:21:50.261 [INFO][5922] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:50.263301 containerd[1438]: time="2025-09-10T00:21:50.262784091Z" level=info msg="TearDown network for sandbox \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\" successfully" Sep 10 00:21:50.263301 containerd[1438]: time="2025-09-10T00:21:50.262807611Z" level=info msg="StopPodSandbox for \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\" returns successfully" Sep 10 00:21:50.263730 containerd[1438]: time="2025-09-10T00:21:50.263706513Z" level=info msg="RemovePodSandbox for \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\"" Sep 10 00:21:50.263767 containerd[1438]: time="2025-09-10T00:21:50.263737634Z" level=info msg="Forcibly stopping sandbox \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\"" Sep 10 00:21:50.326446 containerd[1438]: 2025-09-10 00:21:50.295 [WARNING][5949] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bbdc19e7-71b6-4a83-8e74-efca458fc0cb", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 20, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"69641fde2f39ec75c78543aac9da64bd19957cd7fc13921bff96b990b7f3a410", Pod:"coredns-668d6bf9bc-4mrqt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia79fdaf6e14", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:21:50.326446 containerd[1438]: 2025-09-10 00:21:50.295 [INFO][5949] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:50.326446 containerd[1438]: 2025-09-10 00:21:50.295 [INFO][5949] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" iface="eth0" netns="" Sep 10 00:21:50.326446 containerd[1438]: 2025-09-10 00:21:50.295 [INFO][5949] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:50.326446 containerd[1438]: 2025-09-10 00:21:50.295 [INFO][5949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:50.326446 containerd[1438]: 2025-09-10 00:21:50.313 [INFO][5958] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" HandleID="k8s-pod-network.2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Workload="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:50.326446 containerd[1438]: 2025-09-10 00:21:50.313 [INFO][5958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:50.326446 containerd[1438]: 2025-09-10 00:21:50.313 [INFO][5958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 00:21:50.326446 containerd[1438]: 2025-09-10 00:21:50.321 [WARNING][5958] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" HandleID="k8s-pod-network.2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Workload="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:50.326446 containerd[1438]: 2025-09-10 00:21:50.321 [INFO][5958] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" HandleID="k8s-pod-network.2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Workload="localhost-k8s-coredns--668d6bf9bc--4mrqt-eth0" Sep 10 00:21:50.326446 containerd[1438]: 2025-09-10 00:21:50.323 [INFO][5958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:50.326446 containerd[1438]: 2025-09-10 00:21:50.324 [INFO][5949] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5" Sep 10 00:21:50.326904 containerd[1438]: time="2025-09-10T00:21:50.326476288Z" level=info msg="TearDown network for sandbox \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\" successfully" Sep 10 00:21:50.329090 containerd[1438]: time="2025-09-10T00:21:50.329046030Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:21:50.329161 containerd[1438]: time="2025-09-10T00:21:50.329112232Z" level=info msg="RemovePodSandbox \"2b3cb53953d6839d4b0c466d8ac3baa8e175daad1af5fa7bddc310668939f4d5\" returns successfully" Sep 10 00:21:50.329546 containerd[1438]: time="2025-09-10T00:21:50.329512522Z" level=info msg="StopPodSandbox for \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\"" Sep 10 00:21:50.390833 containerd[1438]: 2025-09-10 00:21:50.358 [WARNING][5977] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" WorkloadEndpoint="localhost-k8s-whisker--566897bdb5--x2xlw-eth0" Sep 10 00:21:50.390833 containerd[1438]: 2025-09-10 00:21:50.359 [INFO][5977] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:50.390833 containerd[1438]: 2025-09-10 00:21:50.359 [INFO][5977] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" iface="eth0" netns="" Sep 10 00:21:50.390833 containerd[1438]: 2025-09-10 00:21:50.359 [INFO][5977] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:50.390833 containerd[1438]: 2025-09-10 00:21:50.359 [INFO][5977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:50.390833 containerd[1438]: 2025-09-10 00:21:50.379 [INFO][5985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" HandleID="k8s-pod-network.6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Workload="localhost-k8s-whisker--566897bdb5--x2xlw-eth0" Sep 10 00:21:50.390833 containerd[1438]: 2025-09-10 00:21:50.379 [INFO][5985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:50.390833 containerd[1438]: 2025-09-10 00:21:50.379 [INFO][5985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:50.390833 containerd[1438]: 2025-09-10 00:21:50.386 [WARNING][5985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" HandleID="k8s-pod-network.6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Workload="localhost-k8s-whisker--566897bdb5--x2xlw-eth0" Sep 10 00:21:50.390833 containerd[1438]: 2025-09-10 00:21:50.386 [INFO][5985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" HandleID="k8s-pod-network.6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Workload="localhost-k8s-whisker--566897bdb5--x2xlw-eth0" Sep 10 00:21:50.390833 containerd[1438]: 2025-09-10 00:21:50.387 [INFO][5985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:50.390833 containerd[1438]: 2025-09-10 00:21:50.389 [INFO][5977] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:50.390833 containerd[1438]: time="2025-09-10T00:21:50.390783860Z" level=info msg="TearDown network for sandbox \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\" successfully" Sep 10 00:21:50.390833 containerd[1438]: time="2025-09-10T00:21:50.390807300Z" level=info msg="StopPodSandbox for \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\" returns successfully" Sep 10 00:21:50.391905 containerd[1438]: time="2025-09-10T00:21:50.391193990Z" level=info msg="RemovePodSandbox for \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\"" Sep 10 00:21:50.391905 containerd[1438]: time="2025-09-10T00:21:50.391223830Z" level=info msg="Forcibly stopping sandbox \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\"" Sep 10 00:21:50.453891 containerd[1438]: 2025-09-10 00:21:50.421 [WARNING][6003] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" WorkloadEndpoint="localhost-k8s-whisker--566897bdb5--x2xlw-eth0" Sep 10 00:21:50.453891 containerd[1438]: 2025-09-10 00:21:50.421 [INFO][6003] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:50.453891 containerd[1438]: 2025-09-10 00:21:50.421 [INFO][6003] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" iface="eth0" netns="" Sep 10 00:21:50.453891 containerd[1438]: 2025-09-10 00:21:50.421 [INFO][6003] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:50.453891 containerd[1438]: 2025-09-10 00:21:50.421 [INFO][6003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:50.453891 containerd[1438]: 2025-09-10 00:21:50.438 [INFO][6012] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" HandleID="k8s-pod-network.6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Workload="localhost-k8s-whisker--566897bdb5--x2xlw-eth0" Sep 10 00:21:50.453891 containerd[1438]: 2025-09-10 00:21:50.439 [INFO][6012] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:21:50.453891 containerd[1438]: 2025-09-10 00:21:50.439 [INFO][6012] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:21:50.453891 containerd[1438]: 2025-09-10 00:21:50.448 [WARNING][6012] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" HandleID="k8s-pod-network.6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Workload="localhost-k8s-whisker--566897bdb5--x2xlw-eth0" Sep 10 00:21:50.453891 containerd[1438]: 2025-09-10 00:21:50.448 [INFO][6012] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" HandleID="k8s-pod-network.6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Workload="localhost-k8s-whisker--566897bdb5--x2xlw-eth0" Sep 10 00:21:50.453891 containerd[1438]: 2025-09-10 00:21:50.449 [INFO][6012] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:21:50.453891 containerd[1438]: 2025-09-10 00:21:50.450 [INFO][6003] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe" Sep 10 00:21:50.453891 containerd[1438]: time="2025-09-10T00:21:50.452536129Z" level=info msg="TearDown network for sandbox \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\" successfully" Sep 10 00:21:50.455677 containerd[1438]: time="2025-09-10T00:21:50.455645685Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:21:50.455790 containerd[1438]: time="2025-09-10T00:21:50.455774248Z" level=info msg="RemovePodSandbox \"6483bbbd660c413277fc5aa87cfbb23f52fb1b0f4c6c228cdb04f55d4b893dbe\" returns successfully" Sep 10 00:21:53.448874 systemd[1]: Started sshd@17-10.0.0.141:22-10.0.0.1:60638.service - OpenSSH per-connection server daemon (10.0.0.1:60638). Sep 10 00:21:53.490029 sshd[6026]: Accepted publickey for core from 10.0.0.1 port 60638 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:21:53.491531 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:21:53.495773 systemd-logind[1422]: New session 18 of user core. Sep 10 00:21:53.504231 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 10 00:21:53.718168 sshd[6026]: pam_unix(sshd:session): session closed for user core Sep 10 00:21:53.721559 systemd[1]: sshd@17-10.0.0.141:22-10.0.0.1:60638.service: Deactivated successfully. Sep 10 00:21:53.723624 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 00:21:53.724358 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit. Sep 10 00:21:53.725094 systemd-logind[1422]: Removed session 18. Sep 10 00:21:58.728504 systemd[1]: Started sshd@18-10.0.0.141:22-10.0.0.1:60644.service - OpenSSH per-connection server daemon (10.0.0.1:60644). Sep 10 00:21:58.765574 sshd[6043]: Accepted publickey for core from 10.0.0.1 port 60644 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:21:58.766896 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:21:58.769971 systemd-logind[1422]: New session 19 of user core. Sep 10 00:21:58.778186 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 00:21:58.943683 sshd[6043]: pam_unix(sshd:session): session closed for user core Sep 10 00:21:58.947473 systemd[1]: sshd@18-10.0.0.141:22-10.0.0.1:60644.service: Deactivated successfully. Sep 10 00:21:58.951540 systemd[1]: session-19.scope: Deactivated successfully. 
Sep 10 00:21:58.954904 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit. Sep 10 00:21:58.957105 systemd-logind[1422]: Removed session 19. Sep 10 00:22:01.392524 kubelet[2466]: E0910 00:22:01.392485 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:22:02.391292 kubelet[2466]: E0910 00:22:02.391231 2466 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:22:02.640255 kubelet[2466]: I0910 00:22:02.640073 2466 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:22:03.957463 systemd[1]: Started sshd@19-10.0.0.141:22-10.0.0.1:39474.service - OpenSSH per-connection server daemon (10.0.0.1:39474). Sep 10 00:22:04.001711 sshd[6066]: Accepted publickey for core from 10.0.0.1 port 39474 ssh2: RSA SHA256:lHdvGEK4DxF99fwbUmGy8qRWzrbraZK2zPV76HHbn/o Sep 10 00:22:04.003014 sshd[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:22:04.006594 systemd-logind[1422]: New session 20 of user core. Sep 10 00:22:04.021168 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 10 00:22:04.135991 sshd[6066]: pam_unix(sshd:session): session closed for user core Sep 10 00:22:04.140453 systemd[1]: sshd@19-10.0.0.141:22-10.0.0.1:39474.service: Deactivated successfully. Sep 10 00:22:04.142286 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 00:22:04.142830 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit. Sep 10 00:22:04.143565 systemd-logind[1422]: Removed session 20.