Sep 9 00:08:53.854501 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 00:08:53.854522 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon Sep 8 22:48:00 -00 2025
Sep 9 00:08:53.854531 kernel: KASLR enabled
Sep 9 00:08:53.854537 kernel: efi: EFI v2.7 by EDK II
Sep 9 00:08:53.854543 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 9 00:08:53.854562 kernel: random: crng init done
Sep 9 00:08:53.854569 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:08:53.854575 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 9 00:08:53.854581 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:08:53.854588 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:08:53.854594 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:08:53.854601 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:08:53.854606 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:08:53.854613 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:08:53.854620 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:08:53.854628 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:08:53.854635 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:08:53.854641 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:08:53.854648 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 00:08:53.854654 kernel: NUMA: Failed to initialise from firmware
Sep 9 00:08:53.854662 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:08:53.854668 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 9 00:08:53.854674 kernel: Zone ranges:
Sep 9 00:08:53.854681 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:08:53.854687 kernel: DMA32 empty
Sep 9 00:08:53.854695 kernel: Normal empty
Sep 9 00:08:53.854701 kernel: Movable zone start for each node
Sep 9 00:08:53.854715 kernel: Early memory node ranges
Sep 9 00:08:53.854722 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 9 00:08:53.854728 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 9 00:08:53.854735 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 9 00:08:53.854741 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 00:08:53.854747 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 00:08:53.854754 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 00:08:53.854760 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 00:08:53.854766 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 00:08:53.854772 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 00:08:53.854781 kernel: psci: probing for conduit method from ACPI.
Sep 9 00:08:53.854787 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 00:08:53.854794 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 00:08:53.854803 kernel: psci: Trusted OS migration not required
Sep 9 00:08:53.854809 kernel: psci: SMC Calling Convention v1.1
Sep 9 00:08:53.854817 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 00:08:53.854825 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 9 00:08:53.854832 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 9 00:08:53.854839 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 00:08:53.854846 kernel: Detected PIPT I-cache on CPU0
Sep 9 00:08:53.854853 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 00:08:53.854860 kernel: CPU features: detected: Hardware dirty bit management
Sep 9 00:08:53.854867 kernel: CPU features: detected: Spectre-v4
Sep 9 00:08:53.854874 kernel: CPU features: detected: Spectre-BHB
Sep 9 00:08:53.854881 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 00:08:53.854888 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 00:08:53.854896 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 00:08:53.854903 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 00:08:53.854910 kernel: alternatives: applying boot alternatives
Sep 9 00:08:53.854918 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7395fe4f9fb368b2829f9349e2a89e9a9e96b552675d3b261a5a30cf3c6cb15c
Sep 9 00:08:53.854925 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:08:53.854932 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:08:53.854940 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:08:53.854946 kernel: Fallback order for Node 0: 0
Sep 9 00:08:53.854953 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 9 00:08:53.854960 kernel: Policy zone: DMA
Sep 9 00:08:53.854967 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:08:53.854975 kernel: software IO TLB: area num 4.
Sep 9 00:08:53.854982 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 9 00:08:53.854990 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Sep 9 00:08:53.854996 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:08:53.855003 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:08:53.855011 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:08:53.855018 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:08:53.855025 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:08:53.855031 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:08:53.855038 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:08:53.855046 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:08:53.855054 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 00:08:53.855061 kernel: GICv3: 256 SPIs implemented
Sep 9 00:08:53.855068 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 00:08:53.855075 kernel: Root IRQ handler: gic_handle_irq
Sep 9 00:08:53.855082 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 00:08:53.855089 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 00:08:53.855096 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 00:08:53.855118 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 00:08:53.855125 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 9 00:08:53.855132 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 9 00:08:53.855139 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 9 00:08:53.855146 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 00:08:53.855156 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:08:53.855162 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 00:08:53.855169 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 00:08:53.855176 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 00:08:53.855183 kernel: arm-pv: using stolen time PV
Sep 9 00:08:53.855190 kernel: Console: colour dummy device 80x25
Sep 9 00:08:53.855198 kernel: ACPI: Core revision 20230628
Sep 9 00:08:53.855205 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 00:08:53.855213 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:08:53.855220 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 9 00:08:53.855228 kernel: landlock: Up and running.
Sep 9 00:08:53.855236 kernel: SELinux: Initializing.
Sep 9 00:08:53.855243 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:08:53.855250 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:08:53.855257 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:08:53.855265 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:08:53.855272 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:08:53.855279 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:08:53.855286 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 9 00:08:53.855294 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 9 00:08:53.855301 kernel: Remapping and enabling EFI services.
Sep 9 00:08:53.855308 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:08:53.855315 kernel: Detected PIPT I-cache on CPU1
Sep 9 00:08:53.855322 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 00:08:53.855329 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 9 00:08:53.855336 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:08:53.855343 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 00:08:53.855351 kernel: Detected PIPT I-cache on CPU2
Sep 9 00:08:53.855358 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 00:08:53.855366 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 9 00:08:53.855374 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:08:53.855385 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 00:08:53.855394 kernel: Detected PIPT I-cache on CPU3
Sep 9 00:08:53.855401 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 00:08:53.855409 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 9 00:08:53.855416 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 00:08:53.855424 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 00:08:53.855431 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:08:53.855440 kernel: SMP: Total of 4 processors activated.
Sep 9 00:08:53.855447 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 00:08:53.855455 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 00:08:53.855462 kernel: CPU features: detected: Common not Private translations
Sep 9 00:08:53.855470 kernel: CPU features: detected: CRC32 instructions
Sep 9 00:08:53.855477 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 00:08:53.855485 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 00:08:53.855492 kernel: CPU features: detected: LSE atomic instructions
Sep 9 00:08:53.855500 kernel: CPU features: detected: Privileged Access Never
Sep 9 00:08:53.855508 kernel: CPU features: detected: RAS Extension Support
Sep 9 00:08:53.855515 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 00:08:53.855522 kernel: CPU: All CPU(s) started at EL1
Sep 9 00:08:53.855530 kernel: alternatives: applying system-wide alternatives
Sep 9 00:08:53.855537 kernel: devtmpfs: initialized
Sep 9 00:08:53.855544 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:08:53.855552 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:08:53.855559 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:08:53.855568 kernel: SMBIOS 3.0.0 present.
Sep 9 00:08:53.855575 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 9 00:08:53.855582 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:08:53.855590 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 00:08:53.855597 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 00:08:53.855604 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 00:08:53.855612 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:08:53.855619 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Sep 9 00:08:53.855626 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:08:53.855635 kernel: cpuidle: using governor menu
Sep 9 00:08:53.855642 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 00:08:53.855649 kernel: ASID allocator initialised with 32768 entries
Sep 9 00:08:53.855657 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:08:53.855664 kernel: Serial: AMBA PL011 UART driver
Sep 9 00:08:53.855671 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 00:08:53.855679 kernel: Modules: 0 pages in range for non-PLT usage
Sep 9 00:08:53.855687 kernel: Modules: 509008 pages in range for PLT usage
Sep 9 00:08:53.855695 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:08:53.855704 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 00:08:53.855716 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 00:08:53.855724 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 00:08:53.855731 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:08:53.855739 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 00:08:53.855746 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 00:08:53.855753 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 00:08:53.855760 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:08:53.855768 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:08:53.855777 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:08:53.855785 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:08:53.855792 kernel: ACPI: Interpreter enabled
Sep 9 00:08:53.855799 kernel: ACPI: Using GIC for interrupt routing
Sep 9 00:08:53.855807 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 00:08:53.855814 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 00:08:53.855822 kernel: printk: console [ttyAMA0] enabled
Sep 9 00:08:53.855830 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:08:53.855962 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:08:53.856042 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 00:08:53.856153 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 00:08:53.856227 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 00:08:53.856297 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 00:08:53.856306 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 00:08:53.856314 kernel: PCI host bridge to bus 0000:00
Sep 9 00:08:53.856388 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 00:08:53.856454 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 00:08:53.856516 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 00:08:53.856575 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:08:53.856657 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 9 00:08:53.856745 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 9 00:08:53.856819 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 9 00:08:53.856892 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 9 00:08:53.856978 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:08:53.857049 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 00:08:53.857128 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 9 00:08:53.857197 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 9 00:08:53.857260 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 00:08:53.857320 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 00:08:53.857384 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 00:08:53.857394 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 00:08:53.857402 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 00:08:53.857410 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 00:08:53.857417 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 00:08:53.857425 kernel: iommu: Default domain type: Translated
Sep 9 00:08:53.857432 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 00:08:53.857440 kernel: efivars: Registered efivars operations
Sep 9 00:08:53.857447 kernel: vgaarb: loaded
Sep 9 00:08:53.857457 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 00:08:53.857464 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:08:53.857472 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:08:53.857480 kernel: pnp: PnP ACPI init
Sep 9 00:08:53.857558 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 00:08:53.857568 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 00:08:53.857576 kernel: NET: Registered PF_INET protocol family
Sep 9 00:08:53.857583 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:08:53.857593 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:08:53.857600 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:08:53.857607 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:08:53.857615 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 00:08:53.857622 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:08:53.857634 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:08:53.857642 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:08:53.857649 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:08:53.857656 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:08:53.857665 kernel: kvm [1]: HYP mode not available
Sep 9 00:08:53.857672 kernel: Initialise system trusted keyrings
Sep 9 00:08:53.857680 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:08:53.857687 kernel: Key type asymmetric registered
Sep 9 00:08:53.857694 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:08:53.857702 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 00:08:53.857717 kernel: io scheduler mq-deadline registered
Sep 9 00:08:53.857725 kernel: io scheduler kyber registered
Sep 9 00:08:53.857732 kernel: io scheduler bfq registered
Sep 9 00:08:53.857742 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 00:08:53.857749 kernel: ACPI: button: Power Button [PWRB]
Sep 9 00:08:53.857757 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 00:08:53.857829 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 00:08:53.857839 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:08:53.857847 kernel: thunder_xcv, ver 1.0
Sep 9 00:08:53.857854 kernel: thunder_bgx, ver 1.0
Sep 9 00:08:53.857862 kernel: nicpf, ver 1.0
Sep 9 00:08:53.857870 kernel: nicvf, ver 1.0
Sep 9 00:08:53.857951 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 00:08:53.858020 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T00:08:53 UTC (1757376533)
Sep 9 00:08:53.858030 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 00:08:53.858038 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 9 00:08:53.858046 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 9 00:08:53.858053 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 00:08:53.858061 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:08:53.858068 kernel: Segment Routing with IPv6
Sep 9 00:08:53.858078 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:08:53.858085 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:08:53.858092 kernel: Key type dns_resolver registered
Sep 9 00:08:53.858153 kernel: registered taskstats version 1
Sep 9 00:08:53.858163 kernel: Loading compiled-in X.509 certificates
Sep 9 00:08:53.858171 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: f5b097e6797722e0cc665195a3c415b6be267631'
Sep 9 00:08:53.858179 kernel: Key type .fscrypt registered
Sep 9 00:08:53.858187 kernel: Key type fscrypt-provisioning registered
Sep 9 00:08:53.858195 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:08:53.858205 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:08:53.858213 kernel: ima: No architecture policies found
Sep 9 00:08:53.858220 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 00:08:53.858228 kernel: clk: Disabling unused clocks
Sep 9 00:08:53.858235 kernel: Freeing unused kernel memory: 39424K
Sep 9 00:08:53.858242 kernel: Run /init as init process
Sep 9 00:08:53.858249 kernel: with arguments:
Sep 9 00:08:53.858256 kernel: /init
Sep 9 00:08:53.858263 kernel: with environment:
Sep 9 00:08:53.858272 kernel: HOME=/
Sep 9 00:08:53.858279 kernel: TERM=linux
Sep 9 00:08:53.858286 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:08:53.858295 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 9 00:08:53.858304 systemd[1]: Detected virtualization kvm.
Sep 9 00:08:53.858312 systemd[1]: Detected architecture arm64.
Sep 9 00:08:53.858321 systemd[1]: Running in initrd.
Sep 9 00:08:53.858330 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:08:53.858338 systemd[1]: Hostname set to .
Sep 9 00:08:53.858346 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:08:53.858353 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:08:53.858361 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:08:53.858369 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:08:53.858377 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 00:08:53.858385 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:08:53.858395 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 00:08:53.858403 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 00:08:53.858413 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 00:08:53.858421 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 00:08:53.858429 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:08:53.858437 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:08:53.858445 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:08:53.858455 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:08:53.858463 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:08:53.858471 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:08:53.858478 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:08:53.858487 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:08:53.858495 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 00:08:53.858503 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 9 00:08:53.858511 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:08:53.858518 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:08:53.858528 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:08:53.858536 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:08:53.858544 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 00:08:53.858552 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:08:53.858560 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 00:08:53.858568 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:08:53.858576 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:08:53.858584 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:08:53.858593 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:08:53.858601 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 00:08:53.858609 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:08:53.858616 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:08:53.858625 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:08:53.858651 systemd-journald[237]: Collecting audit messages is disabled.
Sep 9 00:08:53.858670 systemd-journald[237]: Journal started
Sep 9 00:08:53.858690 systemd-journald[237]: Runtime Journal (/run/log/journal/574d8b5f317140cab22bde9e262f59c2) is 5.9M, max 47.3M, 41.4M free.
Sep 9 00:08:53.849280 systemd-modules-load[239]: Inserted module 'overlay'
Sep 9 00:08:53.860304 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:08:53.861604 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:08:53.866137 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:08:53.866156 kernel: Bridge firewalling registered
Sep 9 00:08:53.864602 systemd-modules-load[239]: Inserted module 'br_netfilter'
Sep 9 00:08:53.865391 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:08:53.868360 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:08:53.883232 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:08:53.884984 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:08:53.887071 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:08:53.891257 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:08:53.898354 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:08:53.899817 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:08:53.902296 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:08:53.905482 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:08:53.912219 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 00:08:53.914529 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:08:53.926115 dracut-cmdline[278]: dracut-dracut-053
Sep 9 00:08:53.926938 dracut-cmdline[278]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=7395fe4f9fb368b2829f9349e2a89e9a9e96b552675d3b261a5a30cf3c6cb15c
Sep 9 00:08:53.941717 systemd-resolved[281]: Positive Trust Anchors:
Sep 9 00:08:53.941735 systemd-resolved[281]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:08:53.941766 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:08:53.946387 systemd-resolved[281]: Defaulting to hostname 'linux'.
Sep 9 00:08:53.947340 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:08:53.950511 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:08:53.988130 kernel: SCSI subsystem initialized
Sep 9 00:08:53.992120 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:08:54.001146 kernel: iscsi: registered transport (tcp)
Sep 9 00:08:54.012355 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:08:54.012375 kernel: QLogic iSCSI HBA Driver
Sep 9 00:08:54.053220 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:08:54.063240 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 00:08:54.079009 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:08:54.079048 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:08:54.079059 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 9 00:08:54.124127 kernel: raid6: neonx8 gen() 15720 MB/s
Sep 9 00:08:54.141115 kernel: raid6: neonx4 gen() 15612 MB/s
Sep 9 00:08:54.158123 kernel: raid6: neonx2 gen() 13168 MB/s
Sep 9 00:08:54.175119 kernel: raid6: neonx1 gen() 10467 MB/s
Sep 9 00:08:54.192116 kernel: raid6: int64x8 gen() 6915 MB/s
Sep 9 00:08:54.209117 kernel: raid6: int64x4 gen() 7296 MB/s
Sep 9 00:08:54.226120 kernel: raid6: int64x2 gen() 6102 MB/s
Sep 9 00:08:54.243132 kernel: raid6: int64x1 gen() 5041 MB/s
Sep 9 00:08:54.243165 kernel: raid6: using algorithm neonx8 gen() 15720 MB/s
Sep 9 00:08:54.260128 kernel: raid6: .... xor() 12044 MB/s, rmw enabled
Sep 9 00:08:54.260144 kernel: raid6: using neon recovery algorithm
Sep 9 00:08:54.265205 kernel: xor: measuring software checksum speed
Sep 9 00:08:54.265226 kernel: 8regs : 18964 MB/sec
Sep 9 00:08:54.266301 kernel: 32regs : 19200 MB/sec
Sep 9 00:08:54.266319 kernel: arm64_neon : 27132 MB/sec
Sep 9 00:08:54.266336 kernel: xor: using function: arm64_neon (27132 MB/sec)
Sep 9 00:08:54.315128 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 00:08:54.325560 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:08:54.334289 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:08:54.346423 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Sep 9 00:08:54.349616 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:08:54.359259 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 00:08:54.370216 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
Sep 9 00:08:54.397171 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:08:54.408246 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:08:54.451768 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:08:54.462534 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 00:08:54.475652 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:08:54.478797 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:08:54.481743 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:08:54.483133 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:08:54.489237 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 00:08:54.500858 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 00:08:54.501011 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 00:08:54.510917 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 00:08:54.510961 kernel: GPT:9289727 != 19775487
Sep 9 00:08:54.510973 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 00:08:54.510982 kernel: GPT:9289727 != 19775487
Sep 9 00:08:54.510991 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 00:08:54.511009 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:08:54.505395 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:08:54.514701 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:08:54.514870 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:08:54.518490 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:08:54.519647 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:08:54.519911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:08:54.524847 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:08:54.529755 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (515)
Sep 9 00:08:54.536132 kernel: BTRFS: device fsid 7c1eef97-905d-47ac-bb4a-010204f95541 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (508)
Sep 9 00:08:54.538350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:08:54.545271 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 00:08:54.553148 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:08:54.561342 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:08:54.565852 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 00:08:54.570030 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 00:08:54.571281 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 00:08:54.588278 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 00:08:54.590072 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:08:54.595677 disk-uuid[552]: Primary Header is updated.
Sep 9 00:08:54.595677 disk-uuid[552]: Secondary Entries is updated.
Sep 9 00:08:54.595677 disk-uuid[552]: Secondary Header is updated.
Sep 9 00:08:54.600123 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:08:54.603120 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:08:54.607490 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:08:54.610782 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:08:55.605122 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:08:55.606186 disk-uuid[553]: The operation has completed successfully.
Sep 9 00:08:55.623367 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 00:08:55.623462 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 00:08:55.650279 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 00:08:55.653091 sh[579]: Success
Sep 9 00:08:55.663153 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 9 00:08:55.687930 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 00:08:55.702394 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 00:08:55.705759 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 00:08:55.713528 kernel: BTRFS info (device dm-0): first mount of filesystem 7c1eef97-905d-47ac-bb4a-010204f95541
Sep 9 00:08:55.713564 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:08:55.713576 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 9 00:08:55.715613 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 00:08:55.715631 kernel: BTRFS info (device dm-0): using free space tree
Sep 9 00:08:55.719195 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 00:08:55.720520 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 00:08:55.732258 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 00:08:55.733770 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 00:08:55.740120 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:08:55.740158 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:08:55.740168 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:08:55.743137 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:08:55.749932 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 9 00:08:55.752111 kernel: BTRFS info (device vda6): last unmount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:08:55.757689 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 00:08:55.764309 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 00:08:55.830598 ignition[666]: Ignition 2.19.0
Sep 9 00:08:55.830609 ignition[666]: Stage: fetch-offline
Sep 9 00:08:55.830645 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:08:55.832982 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:08:55.830653 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:08:55.830806 ignition[666]: parsed url from cmdline: ""
Sep 9 00:08:55.830812 ignition[666]: no config URL provided
Sep 9 00:08:55.830817 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 00:08:55.830824 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Sep 9 00:08:55.830846 ignition[666]: op(1): [started] loading QEMU firmware config module
Sep 9 00:08:55.845225 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:08:55.830851 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 00:08:55.839242 ignition[666]: op(1): [finished] loading QEMU firmware config module
Sep 9 00:08:55.866091 systemd-networkd[769]: lo: Link UP
Sep 9 00:08:55.866113 systemd-networkd[769]: lo: Gained carrier
Sep 9 00:08:55.867120 systemd-networkd[769]: Enumeration completed
Sep 9 00:08:55.867201 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:08:55.867667 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:08:55.867670 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:08:55.868821 systemd-networkd[769]: eth0: Link UP
Sep 9 00:08:55.868824 systemd-networkd[769]: eth0: Gained carrier
Sep 9 00:08:55.868831 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:08:55.868981 systemd[1]: Reached target network.target - Network.
Sep 9 00:08:55.896155 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:08:55.898604 ignition[666]: parsing config with SHA512: ce92276df47eeb612acf38ab4b1ace23b52ff1155e96162504916e338263a55ad69b479980c3cd1225c7ceaa1eb728bfd029e7c06b504533ef24f1b6e69d39ca
Sep 9 00:08:55.904706 unknown[666]: fetched base config from "system"
Sep 9 00:08:55.904718 unknown[666]: fetched user config from "qemu"
Sep 9 00:08:55.905205 ignition[666]: fetch-offline: fetch-offline passed
Sep 9 00:08:55.907354 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:08:55.905270 ignition[666]: Ignition finished successfully
Sep 9 00:08:55.908739 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 00:08:55.923302 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 00:08:55.933638 ignition[773]: Ignition 2.19.0
Sep 9 00:08:55.933647 ignition[773]: Stage: kargs
Sep 9 00:08:55.933827 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:08:55.933838 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:08:55.934763 ignition[773]: kargs: kargs passed
Sep 9 00:08:55.938525 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 00:08:55.934809 ignition[773]: Ignition finished successfully
Sep 9 00:08:55.954305 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 00:08:55.963815 ignition[781]: Ignition 2.19.0
Sep 9 00:08:55.963824 ignition[781]: Stage: disks
Sep 9 00:08:55.963978 ignition[781]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:08:55.963988 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:08:55.964959 ignition[781]: disks: disks passed
Sep 9 00:08:55.966963 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 00:08:55.965006 ignition[781]: Ignition finished successfully
Sep 9 00:08:55.969004 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 00:08:55.970366 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 00:08:55.972242 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:08:55.973618 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:08:55.975528 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:08:55.989258 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 00:08:55.998872 systemd-fsck[793]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 9 00:08:56.002389 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 00:08:56.006274 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 00:08:56.050516 kernel: EXT4-fs (vda9): mounted filesystem d987a4c8-1278-4a59-9d40-0c91e08e9423 r/w with ordered data mode. Quota mode: none.
Sep 9 00:08:56.049036 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 00:08:56.050371 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:08:56.058232 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:08:56.059910 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 00:08:56.061396 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 00:08:56.061435 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 00:08:56.067635 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (801)
Sep 9 00:08:56.067658 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:08:56.061455 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:08:56.071624 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:08:56.071645 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:08:56.065786 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 00:08:56.071346 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 00:08:56.076125 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:08:56.076823 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:08:56.105473 initrd-setup-root[825]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 00:08:56.109816 initrd-setup-root[832]: cut: /sysroot/etc/group: No such file or directory
Sep 9 00:08:56.113940 initrd-setup-root[839]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 00:08:56.117726 initrd-setup-root[846]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 00:08:56.181390 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 00:08:56.192202 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 00:08:56.194453 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 00:08:56.199114 kernel: BTRFS info (device vda6): last unmount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:08:56.212323 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 00:08:56.216414 ignition[915]: INFO : Ignition 2.19.0
Sep 9 00:08:56.216414 ignition[915]: INFO : Stage: mount
Sep 9 00:08:56.217921 ignition[915]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:08:56.217921 ignition[915]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:08:56.217921 ignition[915]: INFO : mount: mount passed
Sep 9 00:08:56.217921 ignition[915]: INFO : Ignition finished successfully
Sep 9 00:08:56.221132 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 00:08:56.231216 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 00:08:56.712886 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 00:08:56.725281 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 00:08:56.731241 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (927)
Sep 9 00:08:56.731284 kernel: BTRFS info (device vda6): first mount of filesystem 995cc93a-6fc6-4281-a722-821717f17817
Sep 9 00:08:56.731295 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 00:08:56.732470 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:08:56.734119 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:08:56.735530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 00:08:56.752798 ignition[944]: INFO : Ignition 2.19.0
Sep 9 00:08:56.752798 ignition[944]: INFO : Stage: files
Sep 9 00:08:56.754684 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:08:56.754684 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:08:56.754684 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 00:08:56.758202 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 00:08:56.758202 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 00:08:56.758202 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 00:08:56.758202 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 00:08:56.758202 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 00:08:56.757151 unknown[944]: wrote ssh authorized keys file for user: core
Sep 9 00:08:56.766084 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 9 00:08:56.766084 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 9 00:08:56.766084 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 00:08:56.766084 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 9 00:08:56.840806 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 00:08:57.018820 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 00:08:57.018820 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 00:08:57.018820 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 00:08:57.018820 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:08:57.018820 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:08:57.018820 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:08:57.018820 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:08:57.018820 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:08:57.018820 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:08:57.018153 systemd-networkd[769]: eth0: Gained IPv6LL
Sep 9 00:08:57.036096 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:08:57.036096 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:08:57.036096 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 00:08:57.036096 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 00:08:57.036096 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 00:08:57.036096 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 9 00:08:57.488191 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 00:08:58.238592 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 00:08:58.238592 ignition[944]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 9 00:08:58.243096 ignition[944]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 9 00:08:58.243096 ignition[944]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 9 00:08:58.243096 ignition[944]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 9 00:08:58.243096 ignition[944]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 9 00:08:58.243096 ignition[944]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:08:58.243096 ignition[944]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:08:58.243096 ignition[944]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 9 00:08:58.243096 ignition[944]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Sep 9 00:08:58.243096 ignition[944]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:08:58.243096 ignition[944]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:08:58.261363 ignition[944]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Sep 9 00:08:58.261363 ignition[944]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:08:58.268098 ignition[944]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:08:58.272948 ignition[944]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:08:58.274374 ignition[944]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:08:58.274374 ignition[944]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 00:08:58.274374 ignition[944]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 00:08:58.274374 ignition[944]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:08:58.281654 ignition[944]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:08:58.281654 ignition[944]: INFO : files: files passed
Sep 9 00:08:58.281654 ignition[944]: INFO : Ignition finished successfully
Sep 9 00:08:58.276676 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 00:08:58.290621 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 00:08:58.293201 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 00:08:58.294876 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 00:08:58.294956 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 00:08:58.301032 initrd-setup-root-after-ignition[973]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 00:08:58.303940 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:08:58.303940 initrd-setup-root-after-ignition[975]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:08:58.307225 initrd-setup-root-after-ignition[979]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:08:58.308052 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:08:58.311536 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 00:08:58.317229 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 00:08:58.336402 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:08:58.336499 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:08:58.338680 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:08:58.339487 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:08:58.341442 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:08:58.342249 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:08:58.359652 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:08:58.363056 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:08:58.378633 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:08:58.379905 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:08:58.382020 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:08:58.383869 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:08:58.383978 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:08:58.386522 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:08:58.388506 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:08:58.390094 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:08:58.391630 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:08:58.393598 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:08:58.395462 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:08:58.397335 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:08:58.399219 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:08:58.401036 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 00:08:58.402833 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:08:58.404344 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:08:58.404455 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:08:58.406587 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:08:58.408559 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:08:58.410355 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:08:58.410463 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:08:58.412501 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:08:58.412598 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:08:58.415497 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:08:58.415645 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:08:58.417616 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:08:58.418963 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:08:58.422179 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:08:58.423866 systemd[1]: Stopped target slices.target - Slice Units. 
Sep 9 00:08:58.425824 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:08:58.427431 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:08:58.427558 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:08:58.429120 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:08:58.429260 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:08:58.430904 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:08:58.431048 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:08:58.432728 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:08:58.432869 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:08:58.443374 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:08:58.444944 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:08:58.445743 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:08:58.445911 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:08:58.447882 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:08:58.448026 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:08:58.455400 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 00:08:58.455500 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:08:58.462803 ignition[999]: INFO : Ignition 2.19.0 Sep 9 00:08:58.462803 ignition[999]: INFO : Stage: umount Sep 9 00:08:58.464875 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:08:58.464875 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:08:58.464875 ignition[999]: INFO : umount: umount passed Sep 9 00:08:58.464875 ignition[999]: INFO : Ignition finished successfully Sep 9 00:08:58.463057 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:08:58.467846 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:08:58.467945 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:08:58.469187 systemd[1]: Stopped target network.target - Network. Sep 9 00:08:58.473681 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:08:58.473756 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:08:58.474816 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:08:58.474857 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:08:58.476489 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:08:58.476532 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:08:58.478243 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:08:58.478286 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:08:58.480158 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:08:58.481895 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:08:58.485164 systemd-networkd[769]: eth0: DHCPv6 lease lost Sep 9 00:08:58.486725 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:08:58.486833 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:08:58.488507 systemd[1]: systemd-networkd.socket: Deactivated successfully. 
Sep 9 00:08:58.488537 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:08:58.496185 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:08:58.497034 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:08:58.497088 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:08:58.499219 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:08:58.503424 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:08:58.503512 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:08:58.507041 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:08:58.507161 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:08:58.508452 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:08:58.508495 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:08:58.510142 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 00:08:58.510187 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:08:58.512610 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:08:58.512707 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:08:58.515320 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:08:58.515444 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:08:58.518262 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:08:58.518326 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:08:58.520494 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:08:58.520531 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:08:58.522212 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:08:58.522262 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:08:58.524635 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:08:58.524679 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:08:58.527526 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:08:58.527568 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:08:58.535235 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:08:58.536891 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:08:58.536964 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:08:58.539280 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 9 00:08:58.539324 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:08:58.541269 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:08:58.541310 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:08:58.543369 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:08:58.543411 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:08:58.545821 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Sep 9 00:08:58.547132 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:08:58.548926 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:08:58.548999 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:08:58.551299 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:08:58.552566 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:08:58.552634 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:08:58.569261 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:08:58.574967 systemd[1]: Switching root. Sep 9 00:08:58.596744 systemd-journald[237]: Journal stopped Sep 9 00:08:59.305046 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 9 00:08:59.305116 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:08:59.305130 kernel: SELinux: policy capability open_perms=1 Sep 9 00:08:59.305140 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:08:59.305150 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:08:59.305165 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:08:59.305175 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:08:59.305185 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:08:59.305198 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:08:59.305207 kernel: audit: type=1403 audit(1757376538.777:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:08:59.305218 systemd[1]: Successfully loaded SELinux policy in 31.214ms. Sep 9 00:08:59.305234 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.144ms. Sep 9 00:08:59.305245 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 00:08:59.305257 systemd[1]: Detected virtualization kvm. Sep 9 00:08:59.305267 systemd[1]: Detected architecture arm64. Sep 9 00:08:59.305277 systemd[1]: Detected first boot. Sep 9 00:08:59.305287 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:08:59.305300 zram_generator::config[1065]: No configuration found. Sep 9 00:08:59.305312 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:08:59.305323 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:08:59.305333 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:08:59.305344 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:08:59.305355 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:08:59.305365 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:08:59.305375 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:08:59.305387 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:08:59.305398 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:08:59.305409 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:08:59.305419 systemd[1]: Created slice user.slice - User and Session Slice. 
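"Initializing machine ID from VM UUID" above is systemd's first-boot behavior on a hypervisor: it seeds /etc/machine-id from the DMI product UUID the VM firmware exposes. A rough sketch of the idea, assuming the usual sysfs path; systemd's real implementation (sd-id128) also validates the UUID and handles SMBIOS byte-order quirks skipped here.

```python
# Hedged sketch: derive a machine-id-style string from the VM's DMI
# product UUID, as systemd does on first boot in a KVM guest.
from pathlib import Path

def machine_id_from_vm_uuid(uuid_path: str = "/sys/class/dmi/id/product_uuid") -> str:
    raw = Path(uuid_path).read_text().strip()
    # machine-id is the UUID as 32 lowercase hex digits with dashes removed.
    return raw.replace("-", "").lower()

if __name__ == "__main__":
    try:
        print(machine_id_from_vm_uuid())
    except (FileNotFoundError, PermissionError):
        print("DMI product UUID not available on this system")
```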
Sep 9 00:08:59.305430 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:08:59.305442 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:08:59.305453 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:08:59.305463 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:08:59.305474 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:08:59.305486 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:08:59.305496 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 9 00:08:59.305507 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:08:59.305517 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:08:59.305528 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:08:59.305538 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:08:59.305551 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:08:59.305563 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:08:59.305574 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:08:59.305585 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:08:59.305596 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:08:59.305606 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 9 00:08:59.305617 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:08:59.305628 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:08:59.305644 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:08:59.305655 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:08:59.305670 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:08:59.305688 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:08:59.305701 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:08:59.305712 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:08:59.305722 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:08:59.305733 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:08:59.305744 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:08:59.305755 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:08:59.305766 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:08:59.305777 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:08:59.305789 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:08:59.305800 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:08:59.305811 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 9 00:08:59.305821 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:08:59.305832 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:08:59.305843 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:08:59.305854 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 9 00:08:59.305866 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 9 00:08:59.305878 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:08:59.305889 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:08:59.305900 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:08:59.305911 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 00:08:59.305921 kernel: fuse: init (API version 7.39) Sep 9 00:08:59.305952 systemd-journald[1143]: Collecting audit messages is disabled. Sep 9 00:08:59.305974 systemd-journald[1143]: Journal started Sep 9 00:08:59.305997 systemd-journald[1143]: Runtime Journal (/run/log/journal/574d8b5f317140cab22bde9e262f59c2) is 5.9M, max 47.3M, 41.4M free. Sep 9 00:08:59.311438 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:08:59.311487 kernel: ACPI: bus type drm_connector registered Sep 9 00:08:59.314841 kernel: loop: module loaded Sep 9 00:08:59.318870 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:08:59.321071 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:08:59.322419 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:08:59.323519 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:08:59.325396 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:08:59.327078 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:08:59.328521 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:08:59.329918 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:08:59.331392 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:08:59.331556 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:08:59.333026 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:08:59.334483 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:08:59.334632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:08:59.336228 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:08:59.336383 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:08:59.337888 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:08:59.338040 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:08:59.339505 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:08:59.339658 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:08:59.341070 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 9 00:08:59.341283 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:08:59.342742 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:08:59.344362 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:08:59.346137 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:08:59.357935 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:08:59.367229 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:08:59.369444 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:08:59.370648 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 00:08:59.372832 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:08:59.374975 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:08:59.376007 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:08:59.377256 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:08:59.378375 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:08:59.380262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:08:59.383097 systemd-journald[1143]: Time spent on flushing to /var/log/journal/574d8b5f317140cab22bde9e262f59c2 is 17.385ms for 845 entries. Sep 9 00:08:59.383097 systemd-journald[1143]: System Journal (/var/log/journal/574d8b5f317140cab22bde9e262f59c2) is 8.0M, max 195.6M, 187.6M free. Sep 9 00:08:59.405764 systemd-journald[1143]: Received client request to flush runtime journal. Sep 9 00:08:59.384531 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:08:59.387074 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:08:59.388563 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:08:59.390145 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:08:59.393581 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:08:59.395493 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:08:59.398633 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 9 00:08:59.410660 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:08:59.412516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:08:59.415875 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Sep 9 00:08:59.415890 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Sep 9 00:08:59.416817 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 9 00:08:59.420154 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:08:59.436326 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
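The flush accounting journald reports above ("17.385ms for 845 entries") averages out to roughly 20.6 µs per journal entry:

```python
# Average time per entry for the runtime-journal flush reported above.
total_ms, entries = 17.385, 845
print(f"{total_ms / entries * 1000:.1f} us/entry")  # ~20.6 us
```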
Sep 9 00:08:59.454787 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:08:59.464272 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:08:59.477617 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Sep 9 00:08:59.477635 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Sep 9 00:08:59.481450 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:08:59.801709 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:08:59.814334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:08:59.833190 systemd-udevd[1224]: Using default interface naming scheme 'v255'. Sep 9 00:08:59.846895 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:08:59.856324 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:08:59.874254 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:08:59.881126 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1238) Sep 9 00:08:59.886864 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. Sep 9 00:08:59.922605 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:08:59.928620 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:08:59.969723 systemd-networkd[1232]: lo: Link UP Sep 9 00:08:59.969735 systemd-networkd[1232]: lo: Gained carrier Sep 9 00:08:59.970425 systemd-networkd[1232]: Enumeration completed Sep 9 00:08:59.970894 systemd-networkd[1232]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:08:59.970897 systemd-networkd[1232]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:08:59.971311 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:08:59.971477 systemd-networkd[1232]: eth0: Link UP Sep 9 00:08:59.971484 systemd-networkd[1232]: eth0: Gained carrier Sep 9 00:08:59.971496 systemd-networkd[1232]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:08:59.972753 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:08:59.975596 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 00:08:59.984226 systemd-networkd[1232]: eth0: DHCPv4 address 10.0.0.10/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:08:59.985131 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 9 00:08:59.991247 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 9 00:08:59.999359 lvm[1263]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:09:00.006081 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:09:00.041655 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 9 00:09:00.043198 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:09:00.059229 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 9 00:09:00.062810 lvm[1271]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Sep 9 00:09:00.095551 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 9 00:09:00.097041 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:09:00.098438 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:09:00.098471 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:09:00.099520 systemd[1]: Reached target machines.target - Containers. Sep 9 00:09:00.101506 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 9 00:09:00.121264 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:09:00.123642 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:09:00.124862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:09:00.125746 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:09:00.127874 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 9 00:09:00.130358 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:09:00.132976 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:09:00.139645 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:09:00.141124 kernel: loop0: detected capacity change from 0 to 114432 Sep 9 00:09:00.149435 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:09:00.151145 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:09:00.150789 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 9 00:09:00.193125 kernel: loop1: detected capacity change from 0 to 203944 Sep 9 00:09:00.232133 kernel: loop2: detected capacity change from 0 to 114328 Sep 9 00:09:00.271129 kernel: loop3: detected capacity change from 0 to 114432 Sep 9 00:09:00.276136 kernel: loop4: detected capacity change from 0 to 203944 Sep 9 00:09:00.282126 kernel: loop5: detected capacity change from 0 to 114328 Sep 9 00:09:00.286464 (sd-merge)[1293]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:09:00.286891 (sd-merge)[1293]: Merged extensions into '/usr'. Sep 9 00:09:00.290803 systemd[1]: Reloading requested from client PID 1279 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:09:00.290822 systemd[1]: Reloading... Sep 9 00:09:00.334173 zram_generator::config[1322]: No configuration found. Sep 9 00:09:00.372519 ldconfig[1276]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:09:00.425298 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:09:00.467862 systemd[1]: Reloading finished in 176 ms. Sep 9 00:09:00.482849 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:09:00.484359 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
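The (sd-merge) entries above ("Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes' … Merged extensions into '/usr'") are systemd-sysext overlaying extension images onto /usr; the kubernetes.raw symlink Ignition wrote under /etc/extensions earlier is one of them. Below is a hedged sketch of the discovery step only, with the directory list taken from the sysext documentation; the actual overlayfs mount it performs is not shown.

```python
# Hedged sketch of systemd-sysext's discovery step: scan the well-known
# hierarchies for *.raw images (or plain directory trees) to merge over /usr.
from pathlib import Path

SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions() -> list[Path]:
    found = []
    for d in map(Path, SYSEXT_DIRS):
        if not d.is_dir():
            continue
        for entry in sorted(d.iterdir()):
            # Symlinked images, like kubernetes.raw in the log, count too.
            if entry.suffix == ".raw" or entry.is_dir():
                found.append(entry)
    return found

if __name__ == "__main__":
    for ext in discover_extensions():
        print(ext)
```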
Sep 9 00:09:00.501253 systemd[1]: Starting ensure-sysext.service... Sep 9 00:09:00.503120 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:09:00.507895 systemd[1]: Reloading requested from client PID 1363 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:09:00.507912 systemd[1]: Reloading... Sep 9 00:09:00.520190 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:09:00.520448 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:09:00.521140 systemd-tmpfiles[1364]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:09:00.521356 systemd-tmpfiles[1364]: ACLs are not supported, ignoring. Sep 9 00:09:00.521408 systemd-tmpfiles[1364]: ACLs are not supported, ignoring. Sep 9 00:09:00.523866 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:09:00.523880 systemd-tmpfiles[1364]: Skipping /boot Sep 9 00:09:00.531166 systemd-tmpfiles[1364]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:09:00.531180 systemd-tmpfiles[1364]: Skipping /boot Sep 9 00:09:00.541132 zram_generator::config[1393]: No configuration found. Sep 9 00:09:00.636923 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:09:00.680223 systemd[1]: Reloading finished in 172 ms. Sep 9 00:09:00.695919 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:09:00.719536 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:09:00.722080 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:09:00.724445 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:09:00.727330 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:09:00.730363 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:09:00.733283 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:09:00.735977 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:09:00.738314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:09:00.743782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:09:00.745428 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:09:00.746452 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:09:00.746594 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:09:00.754038 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:09:00.755860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:09:00.756013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:09:00.757814 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:09:00.759755 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 9 00:09:00.765230 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:09:00.773465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:09:00.776329 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:09:00.780330 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:09:00.780420 augenrules[1472]: No rules Sep 9 00:09:00.782346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:09:00.784671 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:09:00.787158 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 00:09:00.789003 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:09:00.790885 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:09:00.792700 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:09:00.792841 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:09:00.794725 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:09:00.794875 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:09:00.798926 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:09:00.800617 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:09:00.800841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:09:00.807876 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:09:00.808331 systemd-resolved[1439]: Positive Trust Anchors: Sep 9 00:09:00.808348 systemd-resolved[1439]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:09:00.808380 systemd-resolved[1439]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:09:00.811317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:09:00.813403 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:09:00.813983 systemd-resolved[1439]: Defaulting to hostname 'linux'. Sep 9 00:09:00.816323 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:09:00.819319 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:09:00.820626 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:09:00.820766 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
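The "Positive Trust Anchors" entry above is the root zone's DNSSEC DS record, which systemd-resolved compiles in as its default trust anchor. Decoding its presentation format, with field meanings per RFC 4034:

```python
# Parse the DS record from the resolved log line above.
def parse_ds(rr: str) -> dict:
    owner, _cls, _type, key_tag, alg, digest_type, digest = rr.split(maxsplit=6)
    return {
        "owner": owner,                   # "." = the DNS root zone
        "key_tag": int(key_tag),          # 20326 identifies the 2017 root KSK
        "algorithm": int(alg),            # 8 = RSASHA256
        "digest_type": int(digest_type),  # 2 = SHA-256
        "digest": digest,
    }

print(parse_ds(". IN DS 20326 8 2 "
               "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"))
```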
Sep 9 00:09:00.821404 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:09:00.823210 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:09:00.823357 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:09:00.824995 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:09:00.825155 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:09:00.826632 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:09:00.826790 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:09:00.828363 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:09:00.828552 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:09:00.831216 systemd[1]: Finished ensure-sysext.service. Sep 9 00:09:00.836335 systemd[1]: Reached target network.target - Network. Sep 9 00:09:00.837333 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:09:00.838532 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:09:00.838695 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:09:00.854279 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:09:00.894434 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:09:00.895193 systemd-timesyncd[1507]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:09:00.895240 systemd-timesyncd[1507]: Initial clock synchronization to Tue 2025-09-09 00:09:00.548819 UTC. Sep 9 00:09:00.896394 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:09:00.897522 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:09:00.898778 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:09:00.900018 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 00:09:00.901301 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:09:00.901337 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:09:00.902213 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:09:00.903304 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:09:00.904306 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:09:00.905368 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:09:00.906873 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:09:00.909121 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:09:00.911191 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 00:09:00.920074 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:09:00.921071 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:09:00.921992 systemd[1]: Reached target basic.target - Basic System. 
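systemd-timesyncd's "Contacted time server 10.0.0.1:123" above is a plain (S)NTP exchange over UDP. A minimal sketch of such a client-mode query, assuming the server address from the log is reachable; timesyncd itself does considerably more (poll-interval management, jitter filtering, gradual clock adjustment).

```python
# Hedged sketch of a single SNTP (RFC 4330) client query, the kind of
# exchange behind the timesyncd log entry above.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def sntp_time(server: str = "10.0.0.1", timeout: float = 2.0) -> float:
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    # Transmit timestamp: seconds field lives at bytes 40..43.
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    print(time.ctime(sntp_time()))
```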
Sep 9 00:09:00.923094 systemd[1]: System is tainted: cgroupsv1 Sep 9 00:09:00.923149 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:09:00.923171 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:09:00.924196 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:09:00.926147 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:09:00.927987 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:09:00.932307 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:09:00.933365 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:09:00.934360 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:09:00.935626 jq[1513]: false Sep 9 00:09:00.937589 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:09:00.942453 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:09:00.949253 dbus-daemon[1512]: [system] SELinux support is enabled Sep 9 00:09:00.951733 extend-filesystems[1515]: Found loop3 Sep 9 00:09:00.951733 extend-filesystems[1515]: Found loop4 Sep 9 00:09:00.953416 extend-filesystems[1515]: Found loop5 Sep 9 00:09:00.953416 extend-filesystems[1515]: Found vda Sep 9 00:09:00.953416 extend-filesystems[1515]: Found vda1 Sep 9 00:09:00.953416 extend-filesystems[1515]: Found vda2 Sep 9 00:09:00.953416 extend-filesystems[1515]: Found vda3 Sep 9 00:09:00.953416 extend-filesystems[1515]: Found usr Sep 9 00:09:00.953416 extend-filesystems[1515]: Found vda4 Sep 9 00:09:00.953416 extend-filesystems[1515]: Found vda6 Sep 9 00:09:00.953416 extend-filesystems[1515]: Found vda7 Sep 9 00:09:00.953416 extend-filesystems[1515]: Found vda9 Sep 9 00:09:00.953416 extend-filesystems[1515]: Checking size of /dev/vda9 Sep 9 00:09:00.953250 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:09:00.956373 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:09:00.961471 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:09:00.962548 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:09:00.965149 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:09:00.967018 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 00:09:00.976779 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1230) Sep 9 00:09:00.979857 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:09:00.980094 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:09:00.980360 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:09:00.980569 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 9 00:09:00.982381 jq[1534]: true Sep 9 00:09:00.990000 extend-filesystems[1515]: Resized partition /dev/vda9 Sep 9 00:09:00.992132 extend-filesystems[1543]: resize2fs 1.47.1 (20-May-2024) Sep 9 00:09:01.000291 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 00:09:00.994542 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 00:09:00.994791 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 00:09:01.008769 update_engine[1533]: I20250909 00:09:01.008574 1533 main.cc:92] Flatcar Update Engine starting Sep 9 00:09:01.012820 update_engine[1533]: I20250909 00:09:01.011636 1533 update_check_scheduler.cc:74] Next update check in 5m54s Sep 9 00:09:01.016151 jq[1545]: true Sep 9 00:09:01.014757 (ntainerd)[1552]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 00:09:01.023412 tar[1542]: linux-arm64/helm Sep 9 00:09:01.029689 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 00:09:01.029224 systemd[1]: Started update-engine.service - Update Engine. Sep 9 00:09:01.032393 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 00:09:01.032424 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 00:09:01.034275 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 00:09:01.034293 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 00:09:01.036339 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:09:01.042229 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 00:09:01.042229 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 00:09:01.042229 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 00:09:01.041439 systemd-logind[1530]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 00:09:01.055509 extend-filesystems[1515]: Resized filesystem in /dev/vda9 Sep 9 00:09:01.041734 systemd-logind[1530]: New seat seat0. Sep 9 00:09:01.042336 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 00:09:01.048369 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 00:09:01.050541 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 00:09:01.050747 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 00:09:01.059884 bash[1575]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:09:01.062112 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:09:01.071469 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
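The resize above ("from 553472 to 1864699 blocks") is the root filesystem growing to fill the disk on first boot; at ext4's 4 KiB block size that is roughly 2.1 GiB before and 7.1 GiB after:

```python
# Sanity-check the resize2fs numbers reported in the log above.
BLOCK = 4096  # ext4 block size in bytes
old_blocks, new_blocks = 553472, 1864699
print(f"before: {old_blocks * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after:  {new_blocks * BLOCK / 2**30:.2f} GiB")  # ~7.11 GiB
```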
Sep 9 00:09:01.085412 locksmithd[1566]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 00:09:01.159783 containerd[1552]: time="2025-09-09T00:09:01.159651812Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 9 00:09:01.186844 containerd[1552]: time="2025-09-09T00:09:01.186795271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:09:01.188133 containerd[1552]: time="2025-09-09T00:09:01.188088599Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:09:01.188133 containerd[1552]: time="2025-09-09T00:09:01.188130771Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 00:09:01.188230 containerd[1552]: time="2025-09-09T00:09:01.188145925Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 00:09:01.188757 containerd[1552]: time="2025-09-09T00:09:01.188310098Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 9 00:09:01.188757 containerd[1552]: time="2025-09-09T00:09:01.188333212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 9 00:09:01.188757 containerd[1552]: time="2025-09-09T00:09:01.188385640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:09:01.188757 containerd[1552]: time="2025-09-09T00:09:01.188397427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:09:01.188757 containerd[1552]: time="2025-09-09T00:09:01.188579547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:09:01.188757 containerd[1552]: time="2025-09-09T00:09:01.188595391Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 00:09:01.188757 containerd[1552]: time="2025-09-09T00:09:01.188607483Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:09:01.188757 containerd[1552]: time="2025-09-09T00:09:01.188616362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 9 00:09:01.188757 containerd[1552]: time="2025-09-09T00:09:01.188678051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:09:01.189516 containerd[1552]: time="2025-09-09T00:09:01.189481616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 00:09:01.189721 containerd[1552]: time="2025-09-09T00:09:01.189692247Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 00:09:01.189721 containerd[1552]: time="2025-09-09T00:09:01.189716547Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 00:09:01.189845 containerd[1552]: time="2025-09-09T00:09:01.189804489Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 9 00:09:01.189873 containerd[1552]: time="2025-09-09T00:09:01.189853434Z" level=info msg="metadata content store policy set" policy=shared Sep 9 00:09:01.192973 containerd[1552]: time="2025-09-09T00:09:01.192731505Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 00:09:01.192973 containerd[1552]: time="2025-09-09T00:09:01.192775476Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 00:09:01.192973 containerd[1552]: time="2025-09-09T00:09:01.192788870Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 9 00:09:01.192973 containerd[1552]: time="2025-09-09T00:09:01.192802455Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 9 00:09:01.192973 containerd[1552]: time="2025-09-09T00:09:01.192816691Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 9 00:09:01.192973 containerd[1552]: time="2025-09-09T00:09:01.192943819Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193235082Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193348281Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193371012Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193383182Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193395772Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193407712Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193419001Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193432051Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193447435Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193459757Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193470587Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193481647Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193500514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.193864 containerd[1552]: time="2025-09-09T00:09:01.193513334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193524317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193537672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193549459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193561629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193578964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193592205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193603839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193616850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193627757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193637975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193653435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193667518Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193686423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193701615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Sep 9 00:09:01.194141 containerd[1552]: time="2025-09-09T00:09:01.193711986Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 00:09:01.194380 containerd[1552]: time="2025-09-09T00:09:01.193809992Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 00:09:01.194380 containerd[1552]: time="2025-09-09T00:09:01.193826983Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 9 00:09:01.194380 containerd[1552]: time="2025-09-09T00:09:01.193836665Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 00:09:01.194380 containerd[1552]: time="2025-09-09T00:09:01.193847457Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 9 00:09:01.194380 containerd[1552]: time="2025-09-09T00:09:01.193856297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194380 containerd[1552]: time="2025-09-09T00:09:01.193868007Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 9 00:09:01.194380 containerd[1552]: time="2025-09-09T00:09:01.193884386Z" level=info msg="NRI interface is disabled by configuration." Sep 9 00:09:01.194380 containerd[1552]: time="2025-09-09T00:09:01.193897283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 9 00:09:01.194507 containerd[1552]: time="2025-09-09T00:09:01.194241280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: 
TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 00:09:01.194507 containerd[1552]: time="2025-09-09T00:09:01.194297688Z" level=info msg="Connect containerd service" Sep 9 00:09:01.194507 containerd[1552]: time="2025-09-09T00:09:01.194387619Z" level=info msg="using legacy CRI server" Sep 9 00:09:01.194507 containerd[1552]: time="2025-09-09T00:09:01.194394087Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:09:01.194507 containerd[1552]: time="2025-09-09T00:09:01.194469973Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 00:09:01.195877 containerd[1552]: time="2025-09-09T00:09:01.195126089Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:09:01.195877 containerd[1552]: time="2025-09-09T00:09:01.195546318Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 00:09:01.195877 containerd[1552]: time="2025-09-09T00:09:01.195569049Z" level=info msg="Start subscribing containerd event" Sep 9 00:09:01.195877 containerd[1552]: time="2025-09-09T00:09:01.195615469Z" level=info msg="Start recovering state" Sep 9 00:09:01.195877 containerd[1552]: time="2025-09-09T00:09:01.195686764Z" level=info msg="Start event monitor" Sep 9 00:09:01.195877 containerd[1552]: time="2025-09-09T00:09:01.195699316Z" level=info msg="Start snapshots syncer" Sep 9 00:09:01.195877 containerd[1552]: time="2025-09-09T00:09:01.195709878Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:09:01.195877 containerd[1552]: time="2025-09-09T00:09:01.195717685Z" level=info msg="Start streaming server" Sep 9 00:09:01.195877 containerd[1552]: time="2025-09-09T00:09:01.195585466Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:09:01.195877 containerd[1552]: time="2025-09-09T00:09:01.195856753Z" level=info msg="containerd successfully booted in 0.037097s" Sep 9 00:09:01.195956 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:09:01.211226 sshd_keygen[1544]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 00:09:01.229279 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 00:09:01.237377 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 00:09:01.242542 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 00:09:01.242765 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 00:09:01.245353 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
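Note: the "failed to load cni during init" error above is expected on a first boot: the CRI plugin looks for a network config under /etc/cni/net.d (the NetworkPluginConfDir in the config dump) and finds none, and the cni conf syncer started a few entries later keeps retrying until one appears. A minimal bridge config, as an illustrative sketch only (the file name and subnet are assumptions, and the standard CNI plugins must exist under /opt/cni/bin, the NetworkPluginBinDir above); note NetworkPluginMaxConfNum:1 means only the first config file, sorted by name, is used. A sketch of /etc/cni/net.d/10-bridge.conflist:

  {
    "cniVersion": "1.0.0",
    "name": "bridge-net",
    "plugins": [
      { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }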
Sep 9 00:09:01.255856 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 00:09:01.268346 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 00:09:01.271360 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 9 00:09:01.272615 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 00:09:01.377255 tar[1542]: linux-arm64/LICENSE Sep 9 00:09:01.377353 tar[1542]: linux-arm64/README.md Sep 9 00:09:01.395319 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 00:09:01.815224 systemd-networkd[1232]: eth0: Gained IPv6LL Sep 9 00:09:01.817549 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 00:09:01.819276 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 00:09:01.830361 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 00:09:01.832508 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:09:01.834516 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 00:09:01.848989 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:09:01.849217 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:09:01.851414 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:09:01.852878 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:09:02.357224 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:09:02.358782 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:09:02.361681 (kubelet)[1649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:09:02.363844 systemd[1]: Startup finished in 5.601s (kernel) + 3.617s (userspace) = 9.219s. Sep 9 00:09:02.732629 kubelet[1649]: E0909 00:09:02.732553 1649 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:09:02.734930 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:09:02.735137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:09:06.193645 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:09:06.207318 systemd[1]: Started sshd@0-10.0.0.10:22-10.0.0.1:48554.service - OpenSSH per-connection server daemon (10.0.0.1:48554). Sep 9 00:09:06.246535 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 48554 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:09:06.248225 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:09:06.256668 systemd-logind[1530]: New session 1 of user core. Sep 9 00:09:06.257525 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:09:06.269309 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:09:06.278478 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:09:06.281379 systemd[1]: Starting user@500.service - User Manager for UID 500... 
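Note: the kubelet failure above is the expected first-boot crash loop: /var/lib/kubelet/config.yaml is written by kubeadm init/join, not by the OS image, so the unit exits with status 1 until that happens (systemd keeps rescheduling it, as the later "Scheduled restart job" entries show). For orientation, a minimal hand-written KubeletConfiguration, as a sketch rather than what kubeadm generates, using values that match this log (the cgroupfs driver and static pod path visible once the kubelet does start):

  # sketch of /var/lib/kubelet/config.yaml; kubeadm normally generates this file
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: cgroupfs
  staticPodPath: /etc/kubernetes/manifests
  authentication:
    anonymous:
      enabled: false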
Sep 9 00:09:06.287677 (systemd)[1668]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:09:06.358986 systemd[1668]: Queued start job for default target default.target. Sep 9 00:09:06.359360 systemd[1668]: Created slice app.slice - User Application Slice. Sep 9 00:09:06.359394 systemd[1668]: Reached target paths.target - Paths. Sep 9 00:09:06.359406 systemd[1668]: Reached target timers.target - Timers. Sep 9 00:09:06.369192 systemd[1668]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:09:06.374554 systemd[1668]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:09:06.374604 systemd[1668]: Reached target sockets.target - Sockets. Sep 9 00:09:06.374615 systemd[1668]: Reached target basic.target - Basic System. Sep 9 00:09:06.374647 systemd[1668]: Reached target default.target - Main User Target. Sep 9 00:09:06.374669 systemd[1668]: Startup finished in 82ms. Sep 9 00:09:06.375071 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:09:06.376331 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:09:06.433344 systemd[1]: Started sshd@1-10.0.0.10:22-10.0.0.1:48562.service - OpenSSH per-connection server daemon (10.0.0.1:48562). Sep 9 00:09:06.464772 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 48562 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:09:06.465911 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:09:06.469843 systemd-logind[1530]: New session 2 of user core. Sep 9 00:09:06.480351 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:09:06.530873 sshd[1680]: pam_unix(sshd:session): session closed for user core Sep 9 00:09:06.545391 systemd[1]: Started sshd@2-10.0.0.10:22-10.0.0.1:48568.service - OpenSSH per-connection server daemon (10.0.0.1:48568). Sep 9 00:09:06.545768 systemd[1]: sshd@1-10.0.0.10:22-10.0.0.1:48562.service: Deactivated successfully. Sep 9 00:09:06.548209 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:09:06.548762 systemd-logind[1530]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:09:06.549783 systemd-logind[1530]: Removed session 2. Sep 9 00:09:06.583253 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 48568 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:09:06.584496 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:09:06.588143 systemd-logind[1530]: New session 3 of user core. Sep 9 00:09:06.605351 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:09:06.655747 sshd[1685]: pam_unix(sshd:session): session closed for user core Sep 9 00:09:06.668363 systemd[1]: Started sshd@3-10.0.0.10:22-10.0.0.1:48580.service - OpenSSH per-connection server daemon (10.0.0.1:48580). Sep 9 00:09:06.668845 systemd[1]: sshd@2-10.0.0.10:22-10.0.0.1:48568.service: Deactivated successfully. Sep 9 00:09:06.670148 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:09:06.674385 systemd-logind[1530]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:09:06.675546 systemd-logind[1530]: Removed session 3. 
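Note: each "Accepted publickey ... RSA SHA256:h2hdqj5up/..." entry logs the fingerprint of the client key that authenticated. It can be compared against the server's authorized keys with, for example:

  ssh-keygen -lf ~core/.ssh/authorized_keys

which prints one SHA256 fingerprint per key in the file.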
Sep 9 00:09:06.699844 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 48580 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:09:06.701066 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:09:06.706093 systemd-logind[1530]: New session 4 of user core. Sep 9 00:09:06.720369 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:09:06.772403 sshd[1693]: pam_unix(sshd:session): session closed for user core Sep 9 00:09:06.783342 systemd[1]: Started sshd@4-10.0.0.10:22-10.0.0.1:48590.service - OpenSSH per-connection server daemon (10.0.0.1:48590). Sep 9 00:09:06.783706 systemd[1]: sshd@3-10.0.0.10:22-10.0.0.1:48580.service: Deactivated successfully. Sep 9 00:09:06.787133 systemd-logind[1530]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:09:06.787599 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:09:06.788741 systemd-logind[1530]: Removed session 4. Sep 9 00:09:06.817390 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 48590 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:09:06.818538 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:09:06.822178 systemd-logind[1530]: New session 5 of user core. Sep 9 00:09:06.834354 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 00:09:06.892781 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:09:06.893047 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:09:06.906770 sudo[1708]: pam_unix(sudo:session): session closed for user root Sep 9 00:09:06.908383 sshd[1701]: pam_unix(sshd:session): session closed for user core Sep 9 00:09:06.924329 systemd[1]: Started sshd@5-10.0.0.10:22-10.0.0.1:48596.service - OpenSSH per-connection server daemon (10.0.0.1:48596). Sep 9 00:09:06.924789 systemd[1]: sshd@4-10.0.0.10:22-10.0.0.1:48590.service: Deactivated successfully. Sep 9 00:09:06.926011 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:09:06.927570 systemd-logind[1530]: Session 5 logged out. Waiting for processes to exit. Sep 9 00:09:06.928477 systemd-logind[1530]: Removed session 5. Sep 9 00:09:06.963284 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 48596 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:09:06.964495 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:09:06.967839 systemd-logind[1530]: New session 6 of user core. Sep 9 00:09:06.978344 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:09:07.028667 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:09:07.028937 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:09:07.032035 sudo[1718]: pam_unix(sudo:session): session closed for user root Sep 9 00:09:07.036069 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 9 00:09:07.036445 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:09:07.059411 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 9 00:09:07.061319 auditctl[1721]: No rules Sep 9 00:09:07.062172 systemd[1]: audit-rules.service: Deactivated successfully. 
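Note: the sudo entries above appear to be setup steps run over SSH: deleting the shipped audit rule files and restarting audit-rules.service flushes the kernel rule set, which is why auditctl reports "No rules" above and augenrules reports the same below. Files under /etc/audit/rules.d/ use plain auditctl syntax, one rule per line; a hypothetical example for illustration (not the content of the deleted files):

  # /etc/audit/rules.d/10-example.rules (hypothetical)
  -w /etc/passwd -p wa -k identity
  -a always,exit -F arch=b64 -S execve -k exec-log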
Sep 9 00:09:07.062409 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 9 00:09:07.064395 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:09:07.089725 augenrules[1740]: No rules Sep 9 00:09:07.090386 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 00:09:07.091303 sudo[1717]: pam_unix(sudo:session): session closed for user root Sep 9 00:09:07.093284 sshd[1710]: pam_unix(sshd:session): session closed for user core Sep 9 00:09:07.108368 systemd[1]: Started sshd@6-10.0.0.10:22-10.0.0.1:48600.service - OpenSSH per-connection server daemon (10.0.0.1:48600). Sep 9 00:09:07.108793 systemd[1]: sshd@5-10.0.0.10:22-10.0.0.1:48596.service: Deactivated successfully. Sep 9 00:09:07.110232 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:09:07.111807 systemd-logind[1530]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:09:07.112834 systemd-logind[1530]: Removed session 6. Sep 9 00:09:07.142410 sshd[1747]: Accepted publickey for core from 10.0.0.1 port 48600 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:09:07.143569 sshd[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:09:07.147729 systemd-logind[1530]: New session 7 of user core. Sep 9 00:09:07.160358 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 00:09:07.210132 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:09:07.210402 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:09:07.480343 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 00:09:07.480589 (dockerd)[1772]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:09:07.691737 dockerd[1772]: time="2025-09-09T00:09:07.691662733Z" level=info msg="Starting up" Sep 9 00:09:07.924298 dockerd[1772]: time="2025-09-09T00:09:07.924191857Z" level=info msg="Loading containers: start." Sep 9 00:09:08.013147 kernel: Initializing XFRM netlink socket Sep 9 00:09:08.072341 systemd-networkd[1232]: docker0: Link UP Sep 9 00:09:08.094439 dockerd[1772]: time="2025-09-09T00:09:08.094283375Z" level=info msg="Loading containers: done." Sep 9 00:09:08.105210 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1045399204-merged.mount: Deactivated successfully. Sep 9 00:09:08.108593 dockerd[1772]: time="2025-09-09T00:09:08.108472272Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:09:08.108593 dockerd[1772]: time="2025-09-09T00:09:08.108573482Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 9 00:09:08.108708 dockerd[1772]: time="2025-09-09T00:09:08.108668283Z" level=info msg="Daemon has completed initialization" Sep 9 00:09:08.136965 systemd[1]: Started docker.service - Docker Application Container Engine. 
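Note: the overlay2 warning during docker startup below is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, Docker cannot use the native overlayfs diff path and falls back to a slower method when building or committing images; running containers are unaffected. The storage driver can be pinned explicitly in the daemon config. A minimal /etc/docker/daemon.json sketch (the key is real; writing the file at all is optional here, since overlay2 is already auto-selected):

  {
    "storage-driver": "overlay2"
  }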
Sep 9 00:09:08.137211 dockerd[1772]: time="2025-09-09T00:09:08.136738278Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:09:08.648727 containerd[1552]: time="2025-09-09T00:09:08.648687849Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 00:09:09.226248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2121255200.mount: Deactivated successfully. Sep 9 00:09:10.253758 containerd[1552]: time="2025-09-09T00:09:10.253687830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:10.254927 containerd[1552]: time="2025-09-09T00:09:10.254882953Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443" Sep 9 00:09:10.256288 containerd[1552]: time="2025-09-09T00:09:10.256257035Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:10.260121 containerd[1552]: time="2025-09-09T00:09:10.258923754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:10.260213 containerd[1552]: time="2025-09-09T00:09:10.260135931Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.611399508s" Sep 9 00:09:10.260213 containerd[1552]: time="2025-09-09T00:09:10.260184965Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\"" Sep 9 00:09:10.261568 containerd[1552]: time="2025-09-09T00:09:10.261536228Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 00:09:11.458317 containerd[1552]: time="2025-09-09T00:09:11.457389049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:11.458317 containerd[1552]: time="2025-09-09T00:09:11.457832620Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311" Sep 9 00:09:11.458801 containerd[1552]: time="2025-09-09T00:09:11.458764171Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:11.461943 containerd[1552]: time="2025-09-09T00:09:11.461885222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:11.463168 containerd[1552]: time="2025-09-09T00:09:11.463137677Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.20156677s" Sep 9 00:09:11.463240 containerd[1552]: time="2025-09-09T00:09:11.463172950Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\"" Sep 9 00:09:11.463810 containerd[1552]: time="2025-09-09T00:09:11.463788026Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 00:09:12.849215 containerd[1552]: time="2025-09-09T00:09:12.849164948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:12.850633 containerd[1552]: time="2025-09-09T00:09:12.850595828Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905" Sep 9 00:09:12.851500 containerd[1552]: time="2025-09-09T00:09:12.851458466Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:12.855632 containerd[1552]: time="2025-09-09T00:09:12.855227179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:12.856496 containerd[1552]: time="2025-09-09T00:09:12.856457557Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.392643142s" Sep 9 00:09:12.856496 containerd[1552]: time="2025-09-09T00:09:12.856495535Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\"" Sep 9 00:09:12.857044 containerd[1552]: time="2025-09-09T00:09:12.856902123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 00:09:12.928550 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:09:12.942437 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:09:13.033099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:09:13.037243 (kubelet)[1995]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:09:13.096589 kubelet[1995]: E0909 00:09:13.096497 1995 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:09:13.099587 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:09:13.099767 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:09:13.960702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2983623896.mount: Deactivated successfully. 
Sep 9 00:09:14.376427 containerd[1552]: time="2025-09-09T00:09:14.376290726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:14.378614 containerd[1552]: time="2025-09-09T00:09:14.378574253Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097" Sep 9 00:09:14.381988 containerd[1552]: time="2025-09-09T00:09:14.381910130Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:14.385742 containerd[1552]: time="2025-09-09T00:09:14.385693726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:14.386603 containerd[1552]: time="2025-09-09T00:09:14.386429748Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.529489698s" Sep 9 00:09:14.386603 containerd[1552]: time="2025-09-09T00:09:14.386466982Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\"" Sep 9 00:09:14.386987 containerd[1552]: time="2025-09-09T00:09:14.386937842Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 00:09:14.898810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158159198.mount: Deactivated successfully. 
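Note: the images fetched in this stretch (kube-apiserver, controller-manager, scheduler, proxy, then coredns, pause, and etcd below) are the standard control-plane set for v1.31. Assuming a kubeadm flow, which the KUBELET_KUBEADM_ARGS reference above suggests, the set can be listed or pre-pulled ahead of init, e.g.:

  kubeadm config images list --kubernetes-version v1.31.12
  crictl pull registry.k8s.io/coredns/coredns:v1.11.3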
Sep 9 00:09:15.634165 containerd[1552]: time="2025-09-09T00:09:15.634112758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:15.635550 containerd[1552]: time="2025-09-09T00:09:15.635325529Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 9 00:09:15.637124 containerd[1552]: time="2025-09-09T00:09:15.636860812Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:15.639941 containerd[1552]: time="2025-09-09T00:09:15.639906983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:15.642110 containerd[1552]: time="2025-09-09T00:09:15.642068617Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.255098455s" Sep 9 00:09:15.642173 containerd[1552]: time="2025-09-09T00:09:15.642122972Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 9 00:09:15.642532 containerd[1552]: time="2025-09-09T00:09:15.642511879Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 00:09:16.082081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3022617854.mount: Deactivated successfully. 
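Note: the pause:3.10 pull just announced reflects the client's default sandbox image for v1.31, but containerd's CRI config earlier in this log still says SandboxImage:registry.k8s.io/pause:3.8, so sandboxes will actually be started from 3.8 (which is exactly the image containerd pulls later when creating the static pod sandboxes). Aligning the two is a containerd setting; a sketch for the 1.7-era config format (the sandbox_image key is real, the override itself is an assumption):

  # /etc/containerd/config.toml (sketch)
  version = 2
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.10"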
Sep 9 00:09:16.088079 containerd[1552]: time="2025-09-09T00:09:16.087320940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:16.088251 containerd[1552]: time="2025-09-09T00:09:16.088208923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 9 00:09:16.089579 containerd[1552]: time="2025-09-09T00:09:16.089392928Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:16.091249 containerd[1552]: time="2025-09-09T00:09:16.091201542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:16.092754 containerd[1552]: time="2025-09-09T00:09:16.092122492Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 449.582356ms" Sep 9 00:09:16.092754 containerd[1552]: time="2025-09-09T00:09:16.092153231Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 9 00:09:16.093149 containerd[1552]: time="2025-09-09T00:09:16.093124525Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 00:09:16.618531 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount98446924.mount: Deactivated successfully. Sep 9 00:09:18.821923 containerd[1552]: time="2025-09-09T00:09:18.821871200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:18.823056 containerd[1552]: time="2025-09-09T00:09:18.823029079Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163" Sep 9 00:09:18.823326 containerd[1552]: time="2025-09-09T00:09:18.823303646Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:18.826521 containerd[1552]: time="2025-09-09T00:09:18.826483483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:18.828954 containerd[1552]: time="2025-09-09T00:09:18.828921651Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.735764743s" Sep 9 00:09:18.829012 containerd[1552]: time="2025-09-09T00:09:18.828960079Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Sep 9 00:09:23.178466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
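Note: for scale, the etcd pull below works out to roughly 66,535,646 bytes / 2.736 s, about 24 MB/s of effective pull throughput; etcd is by far the largest image in the set, which is why it dominates the pull phase.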
Sep 9 00:09:23.188279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:09:23.253508 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 00:09:23.253676 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 00:09:23.254007 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:09:23.269395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:09:23.294117 systemd[1]: Reloading requested from client PID 2159 ('systemctl') (unit session-7.scope)... Sep 9 00:09:23.294132 systemd[1]: Reloading... Sep 9 00:09:23.369136 zram_generator::config[2207]: No configuration found. Sep 9 00:09:23.494875 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:09:23.547361 systemd[1]: Reloading finished in 252 ms. Sep 9 00:09:23.600835 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:09:23.603511 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:09:23.603752 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:09:23.605459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:09:23.706391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:09:23.709915 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:09:23.744221 kubelet[2258]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:09:23.744221 kubelet[2258]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 00:09:23.744221 kubelet[2258]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
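Note: the three deprecation warnings below concern kubelet flags that belong in the config file. --container-runtime-endpoint and --volume-plugin-dir have direct KubeletConfiguration equivalents; --pod-infra-container-image is being removed outright since, as the second warning says, the image garbage collector now learns the sandbox image from the CRI. A sketch of the config-file form, with values inferred from elsewhere in this log (the containerd socket from the CRI dump, the flexvolume directory the kubelet recreates below):

  # fragment of /var/lib/kubelet/config.yaml (sketch; values inferred from this log)
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/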
Sep 9 00:09:23.744577 kubelet[2258]: I0909 00:09:23.744279 2258 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:09:25.625869 kubelet[2258]: I0909 00:09:25.625817 2258 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:09:25.625869 kubelet[2258]: I0909 00:09:25.625856 2258 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:09:25.626239 kubelet[2258]: I0909 00:09:25.626114 2258 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:09:25.644245 kubelet[2258]: E0909 00:09:25.644200 2258 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:09:25.647249 kubelet[2258]: I0909 00:09:25.647210 2258 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:09:25.652971 kubelet[2258]: E0909 00:09:25.652941 2258 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:09:25.652971 kubelet[2258]: I0909 00:09:25.652968 2258 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:09:25.656704 kubelet[2258]: I0909 00:09:25.656679 2258 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:09:25.657563 kubelet[2258]: I0909 00:09:25.657537 2258 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:09:25.657693 kubelet[2258]: I0909 00:09:25.657664 2258 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:09:25.657845 kubelet[2258]: I0909 00:09:25.657691 2258 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 9 00:09:25.657913 kubelet[2258]: I0909 00:09:25.657908 2258 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:09:25.657937 kubelet[2258]: I0909 00:09:25.657917 2258 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:09:25.658162 kubelet[2258]: I0909 00:09:25.658151 2258 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:09:25.660150 kubelet[2258]: I0909 00:09:25.659974 2258 kubelet.go:408] "Attempting to sync node with API server" Sep 9 00:09:25.660150 kubelet[2258]: I0909 00:09:25.660001 2258 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:09:25.660150 kubelet[2258]: I0909 00:09:25.660020 2258 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:09:25.660150 kubelet[2258]: I0909 00:09:25.660095 2258 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:09:25.661241 kubelet[2258]: W0909 00:09:25.661193 2258 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Sep 9 00:09:25.661285 kubelet[2258]: E0909 00:09:25.661253 2258 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.10:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:09:25.661787 kubelet[2258]: W0909 00:09:25.661747 2258 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Sep 9 00:09:25.661894 kubelet[2258]: E0909 00:09:25.661877 2258 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:09:25.663252 kubelet[2258]: I0909 00:09:25.663233 2258 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 00:09:25.663992 kubelet[2258]: I0909 00:09:25.663974 2258 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:09:25.664170 kubelet[2258]: W0909 00:09:25.664158 2258 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:09:25.665303 kubelet[2258]: I0909 00:09:25.665182 2258 server.go:1274] "Started kubelet" Sep 9 00:09:25.666655 kubelet[2258]: I0909 00:09:25.666602 2258 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:09:25.666721 kubelet[2258]: I0909 00:09:25.666639 2258 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:09:25.666947 kubelet[2258]: I0909 00:09:25.666927 2258 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:09:25.667072 kubelet[2258]: I0909 00:09:25.667057 2258 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:09:25.673077 kubelet[2258]: I0909 00:09:25.670885 2258 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:09:25.673077 kubelet[2258]: E0909 00:09:25.671724 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:09:25.673077 kubelet[2258]: I0909 00:09:25.671757 2258 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:09:25.673077 kubelet[2258]: I0909 00:09:25.672336 2258 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:09:25.673077 kubelet[2258]: I0909 00:09:25.672414 2258 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:09:25.673077 kubelet[2258]: W0909 00:09:25.672499 2258 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Sep 9 00:09:25.673077 kubelet[2258]: E0909 00:09:25.672543 2258 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection 
refused" logger="UnhandledError" Sep 9 00:09:25.673077 kubelet[2258]: I0909 00:09:25.672719 2258 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:09:25.673077 kubelet[2258]: I0909 00:09:25.672807 2258 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:09:25.673077 kubelet[2258]: E0909 00:09:25.672808 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="200ms" Sep 9 00:09:25.674086 kubelet[2258]: I0909 00:09:25.674069 2258 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:09:25.675996 kubelet[2258]: E0909 00:09:25.674572 2258 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.10:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.10:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186374aced114952 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:09:25.66515541 +0000 UTC m=+1.952263702,LastTimestamp:2025-09-09 00:09:25.66515541 +0000 UTC m=+1.952263702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:09:25.675996 kubelet[2258]: E0909 00:09:25.675890 2258 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:09:25.677227 kubelet[2258]: I0909 00:09:25.677208 2258 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:09:25.689053 kubelet[2258]: I0909 00:09:25.688926 2258 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:09:25.690119 kubelet[2258]: I0909 00:09:25.690091 2258 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:09:25.690221 kubelet[2258]: I0909 00:09:25.690210 2258 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 00:09:25.690779 kubelet[2258]: I0909 00:09:25.690273 2258 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 00:09:25.690779 kubelet[2258]: E0909 00:09:25.690319 2258 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:09:25.690779 kubelet[2258]: W0909 00:09:25.690716 2258 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Sep 9 00:09:25.690779 kubelet[2258]: E0909 00:09:25.690743 2258 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:09:25.695583 kubelet[2258]: I0909 00:09:25.695565 2258 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 00:09:25.695657 kubelet[2258]: I0909 00:09:25.695646 2258 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 00:09:25.695726 kubelet[2258]: I0909 00:09:25.695718 2258 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:09:25.697747 kubelet[2258]: I0909 00:09:25.697727 2258 policy_none.go:49] "None policy: Start" Sep 9 00:09:25.698389 kubelet[2258]: I0909 00:09:25.698370 2258 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 00:09:25.698494 kubelet[2258]: I0909 00:09:25.698484 2258 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:09:25.705284 kubelet[2258]: I0909 00:09:25.705260 2258 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:09:25.705567 kubelet[2258]: I0909 00:09:25.705549 2258 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:09:25.705655 kubelet[2258]: I0909 00:09:25.705627 2258 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:09:25.706192 kubelet[2258]: I0909 00:09:25.706178 2258 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:09:25.707035 kubelet[2258]: E0909 00:09:25.707013 2258 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:09:25.807278 kubelet[2258]: I0909 00:09:25.807228 2258 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:09:25.807727 kubelet[2258]: E0909 00:09:25.807703 2258 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Sep 9 00:09:25.873252 kubelet[2258]: E0909 00:09:25.873216 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="400ms" Sep 9 00:09:25.973728 kubelet[2258]: I0909 00:09:25.973477 2258 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f287b54b96d45de1449f4bf35ac86916-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f287b54b96d45de1449f4bf35ac86916\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:09:25.973728 kubelet[2258]: I0909 00:09:25.973513 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f287b54b96d45de1449f4bf35ac86916-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f287b54b96d45de1449f4bf35ac86916\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:09:25.973728 kubelet[2258]: I0909 00:09:25.973532 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:09:25.973728 kubelet[2258]: I0909 00:09:25.973550 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:09:25.973728 kubelet[2258]: I0909 00:09:25.973567 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:09:25.973933 kubelet[2258]: I0909 00:09:25.973606 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f287b54b96d45de1449f4bf35ac86916-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f287b54b96d45de1449f4bf35ac86916\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:09:25.973933 kubelet[2258]: I0909 00:09:25.973622 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:09:25.973933 kubelet[2258]: I0909 00:09:25.973638 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:09:25.973933 kubelet[2258]: I0909 00:09:25.973653 2258 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:09:26.009773 kubelet[2258]: I0909 00:09:26.009741 
2258 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:09:26.010129 kubelet[2258]: E0909 00:09:26.010082 2258 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Sep 9 00:09:26.095709 kubelet[2258]: E0909 00:09:26.095676 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:26.096385 containerd[1552]: time="2025-09-09T00:09:26.096348555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f287b54b96d45de1449f4bf35ac86916,Namespace:kube-system,Attempt:0,}" Sep 9 00:09:26.096827 kubelet[2258]: E0909 00:09:26.096803 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:26.097142 containerd[1552]: time="2025-09-09T00:09:26.097094248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 9 00:09:26.098321 kubelet[2258]: E0909 00:09:26.098294 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:26.098721 containerd[1552]: time="2025-09-09T00:09:26.098582358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 9 00:09:26.274331 kubelet[2258]: E0909 00:09:26.274213 2258 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.10:6443: connect: connection refused" interval="800ms" Sep 9 00:09:26.411971 kubelet[2258]: I0909 00:09:26.411932 2258 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:09:26.412345 kubelet[2258]: E0909 00:09:26.412311 2258 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.10:6443/api/v1/nodes\": dial tcp 10.0.0.10:6443: connect: connection refused" node="localhost" Sep 9 00:09:26.587268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2327174858.mount: Deactivated successfully. 
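Note: the "Nameserver limits exceeded" warnings above mean the host resolv.conf lists more nameservers than the three the kubelet will propagate into pod sandboxes (matching the glibc resolver's limit of three); the message itself shows which three survived. A host /etc/resolv.conf like the following would trigger it; the first three lines are taken from the log, the fourth is a hypothetical extra that would be dropped:

  nameserver 1.1.1.1
  nameserver 1.0.0.1
  nameserver 8.8.8.8
  # hypothetical fourth entry; anything past three is omitted for pods
  nameserver 9.9.9.9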
Sep 9 00:09:26.592922 containerd[1552]: time="2025-09-09T00:09:26.592884507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:09:26.593936 containerd[1552]: time="2025-09-09T00:09:26.593874184Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:09:26.594736 containerd[1552]: time="2025-09-09T00:09:26.594704067Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:09:26.598116 containerd[1552]: time="2025-09-09T00:09:26.596507492Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:09:26.598116 containerd[1552]: time="2025-09-09T00:09:26.597424321Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 9 00:09:26.598262 containerd[1552]: time="2025-09-09T00:09:26.598227844Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:09:26.599495 containerd[1552]: time="2025-09-09T00:09:26.599467097Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:09:26.602162 containerd[1552]: time="2025-09-09T00:09:26.602125566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:09:26.603211 containerd[1552]: time="2025-09-09T00:09:26.603181701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 504.535762ms" Sep 9 00:09:26.606024 containerd[1552]: time="2025-09-09T00:09:26.605992096Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 508.828276ms" Sep 9 00:09:26.606644 containerd[1552]: time="2025-09-09T00:09:26.606617694Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 510.188742ms" Sep 9 00:09:26.607317 kubelet[2258]: W0909 00:09:26.607260 2258 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.10:6443: connect: connection refused Sep 9 00:09:26.607389 kubelet[2258]: 
E0909 00:09:26.607333 2258 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.10:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:09:26.714532 containerd[1552]: time="2025-09-09T00:09:26.714292280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:09:26.714532 containerd[1552]: time="2025-09-09T00:09:26.714385457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:09:26.714532 containerd[1552]: time="2025-09-09T00:09:26.714411097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:26.715862 containerd[1552]: time="2025-09-09T00:09:26.715683659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:09:26.715862 containerd[1552]: time="2025-09-09T00:09:26.715745004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:09:26.715862 containerd[1552]: time="2025-09-09T00:09:26.715759582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:26.715991 containerd[1552]: time="2025-09-09T00:09:26.715884070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:09:26.716014 containerd[1552]: time="2025-09-09T00:09:26.715992983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:09:26.716035 containerd[1552]: time="2025-09-09T00:09:26.716021379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:26.716610 containerd[1552]: time="2025-09-09T00:09:26.716546291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:26.716979 containerd[1552]: time="2025-09-09T00:09:26.716835127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:26.716979 containerd[1552]: time="2025-09-09T00:09:26.716752574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:26.764802 containerd[1552]: time="2025-09-09T00:09:26.764745481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f287b54b96d45de1449f4bf35ac86916,Namespace:kube-system,Attempt:0,} returns sandbox id \"c767685b064ca18b015be7bca38417fb54bfc2f92d8c2eb0e09202df9488a2ad\"" Sep 9 00:09:26.768228 kubelet[2258]: E0909 00:09:26.768196 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:26.768560 containerd[1552]: time="2025-09-09T00:09:26.768440994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"4da56a175ac45180291dbd7a407385d5f95a5d80af3e1eec9002e30711b96bdc\"" Sep 9 00:09:26.770312 kubelet[2258]: E0909 00:09:26.770273 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:26.771172 containerd[1552]: time="2025-09-09T00:09:26.771141838Z" level=info msg="CreateContainer within sandbox \"c767685b064ca18b015be7bca38417fb54bfc2f92d8c2eb0e09202df9488a2ad\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:09:26.771343 containerd[1552]: time="2025-09-09T00:09:26.771320962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"320b14cc275b883060d4e608b42e3f4712e011dea26ab41072457d6c05ec7452\"" Sep 9 00:09:26.772170 kubelet[2258]: E0909 00:09:26.772017 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:26.772985 containerd[1552]: time="2025-09-09T00:09:26.772944144Z" level=info msg="CreateContainer within sandbox \"4da56a175ac45180291dbd7a407385d5f95a5d80af3e1eec9002e30711b96bdc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:09:26.774381 containerd[1552]: time="2025-09-09T00:09:26.774351778Z" level=info msg="CreateContainer within sandbox \"320b14cc275b883060d4e608b42e3f4712e011dea26ab41072457d6c05ec7452\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:09:26.786063 containerd[1552]: time="2025-09-09T00:09:26.786021900Z" level=info msg="CreateContainer within sandbox \"c767685b064ca18b015be7bca38417fb54bfc2f92d8c2eb0e09202df9488a2ad\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"10e9a6171e30e5e9a9a2c9bde2e7e61f157d7fac65e2fd621118787092f944de\"" Sep 9 00:09:26.786926 containerd[1552]: time="2025-09-09T00:09:26.786898910Z" level=info msg="StartContainer for \"10e9a6171e30e5e9a9a2c9bde2e7e61f157d7fac65e2fd621118787092f944de\"" Sep 9 00:09:26.790431 containerd[1552]: time="2025-09-09T00:09:26.790392933Z" level=info msg="CreateContainer within sandbox \"4da56a175ac45180291dbd7a407385d5f95a5d80af3e1eec9002e30711b96bdc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7ccdfc674d33032b16934c2106c93847747e04fc44693a5340518421cfc8b0c2\"" Sep 9 00:09:26.792475 containerd[1552]: time="2025-09-09T00:09:26.792447212Z" level=info msg="StartContainer for 
\"7ccdfc674d33032b16934c2106c93847747e04fc44693a5340518421cfc8b0c2\"" Sep 9 00:09:26.792864 containerd[1552]: time="2025-09-09T00:09:26.792782616Z" level=info msg="CreateContainer within sandbox \"320b14cc275b883060d4e608b42e3f4712e011dea26ab41072457d6c05ec7452\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a9376d46b6eb6bc7e782ef57c194d306e6af104be4f253dda5056e564adfdada\"" Sep 9 00:09:26.793386 containerd[1552]: time="2025-09-09T00:09:26.793358690Z" level=info msg="StartContainer for \"a9376d46b6eb6bc7e782ef57c194d306e6af104be4f253dda5056e564adfdada\"" Sep 9 00:09:26.852240 containerd[1552]: time="2025-09-09T00:09:26.852047537Z" level=info msg="StartContainer for \"10e9a6171e30e5e9a9a2c9bde2e7e61f157d7fac65e2fd621118787092f944de\" returns successfully" Sep 9 00:09:26.852240 containerd[1552]: time="2025-09-09T00:09:26.852171626Z" level=info msg="StartContainer for \"7ccdfc674d33032b16934c2106c93847747e04fc44693a5340518421cfc8b0c2\" returns successfully" Sep 9 00:09:26.859405 containerd[1552]: time="2025-09-09T00:09:26.859307485Z" level=info msg="StartContainer for \"a9376d46b6eb6bc7e782ef57c194d306e6af104be4f253dda5056e564adfdada\" returns successfully" Sep 9 00:09:27.214122 kubelet[2258]: I0909 00:09:27.214074 2258 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:09:27.700886 kubelet[2258]: E0909 00:09:27.700523 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:27.705147 kubelet[2258]: E0909 00:09:27.705119 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:27.706370 kubelet[2258]: E0909 00:09:27.706349 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:28.056399 kubelet[2258]: E0909 00:09:28.056222 2258 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:09:28.129820 kubelet[2258]: I0909 00:09:28.129544 2258 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:09:28.129820 kubelet[2258]: E0909 00:09:28.129622 2258 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:09:28.144000 kubelet[2258]: E0909 00:09:28.143963 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:09:28.244281 kubelet[2258]: E0909 00:09:28.244243 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:09:28.346264 kubelet[2258]: E0909 00:09:28.346144 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:09:28.446659 kubelet[2258]: E0909 00:09:28.446617 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:09:28.547158 kubelet[2258]: E0909 00:09:28.547099 2258 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:09:28.662166 kubelet[2258]: I0909 00:09:28.662043 2258 apiserver.go:52] "Watching 
apiserver" Sep 9 00:09:28.672775 kubelet[2258]: I0909 00:09:28.672747 2258 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 00:09:28.712270 kubelet[2258]: E0909 00:09:28.712234 2258 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 00:09:28.712416 kubelet[2258]: E0909 00:09:28.712400 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:29.847519 kubelet[2258]: E0909 00:09:29.847471 2258 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:30.151327 systemd[1]: Reloading requested from client PID 2534 ('systemctl') (unit session-7.scope)... Sep 9 00:09:30.151715 systemd[1]: Reloading... Sep 9 00:09:30.226141 zram_generator::config[2576]: No configuration found. Sep 9 00:09:30.378296 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:09:30.440652 systemd[1]: Reloading finished in 288 ms. Sep 9 00:09:30.468232 kubelet[2258]: I0909 00:09:30.468194 2258 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:09:30.468331 kubelet[2258]: E0909 00:09:30.468194 2258 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.186374aced114952 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:09:25.66515541 +0000 UTC m=+1.952263702,LastTimestamp:2025-09-09 00:09:25.66515541 +0000 UTC m=+1.952263702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:09:30.468971 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:09:30.478449 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:09:30.478729 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:09:30.489336 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:09:30.582757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:09:30.597582 (kubelet)[2625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:09:30.639355 kubelet[2625]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:09:30.639355 kubelet[2625]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 9 00:09:30.639355 kubelet[2625]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:09:30.639704 kubelet[2625]: I0909 00:09:30.639400 2625 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:09:30.646139 kubelet[2625]: I0909 00:09:30.646091 2625 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 00:09:30.646139 kubelet[2625]: I0909 00:09:30.646136 2625 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:09:30.646389 kubelet[2625]: I0909 00:09:30.646359 2625 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 00:09:30.647659 kubelet[2625]: I0909 00:09:30.647642 2625 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 00:09:30.649706 kubelet[2625]: I0909 00:09:30.649601 2625 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:09:30.652426 kubelet[2625]: E0909 00:09:30.652392 2625 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:09:30.652426 kubelet[2625]: I0909 00:09:30.652427 2625 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:09:30.654828 kubelet[2625]: I0909 00:09:30.654793 2625 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:09:30.657145 kubelet[2625]: I0909 00:09:30.655214 2625 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 00:09:30.657145 kubelet[2625]: I0909 00:09:30.655325 2625 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:09:30.657145 kubelet[2625]: I0909 00:09:30.655350 2625 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 9 00:09:30.657145 kubelet[2625]: I0909 00:09:30.655731 2625 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:09:30.657440 kubelet[2625]: I0909 00:09:30.655741 2625 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 00:09:30.657440 kubelet[2625]: I0909 00:09:30.655780 2625 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:09:30.657440 kubelet[2625]: I0909 00:09:30.655868 2625 kubelet.go:408] "Attempting to sync node with API server" Sep 9 00:09:30.657440 kubelet[2625]: I0909 00:09:30.655885 2625 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:09:30.657440 kubelet[2625]: I0909 00:09:30.655919 2625 kubelet.go:314] "Adding apiserver pod source" Sep 9 00:09:30.657440 kubelet[2625]: I0909 00:09:30.655932 2625 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:09:30.662172 kubelet[2625]: I0909 00:09:30.661636 2625 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 00:09:30.663204 kubelet[2625]: I0909 00:09:30.663178 2625 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:09:30.664490 kubelet[2625]: I0909 00:09:30.664463 2625 server.go:1274] "Started kubelet" Sep 9 00:09:30.665854 kubelet[2625]: I0909 00:09:30.665829 2625 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:09:30.670653 kubelet[2625]: I0909 00:09:30.670613 2625 
server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:09:30.671437 kubelet[2625]: I0909 00:09:30.671415 2625 server.go:449] "Adding debug handlers to kubelet server" Sep 9 00:09:30.672279 kubelet[2625]: I0909 00:09:30.672230 2625 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:09:30.672421 kubelet[2625]: I0909 00:09:30.672405 2625 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:09:30.672596 kubelet[2625]: I0909 00:09:30.672578 2625 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:09:30.673559 kubelet[2625]: I0909 00:09:30.673542 2625 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 00:09:30.673727 kubelet[2625]: E0909 00:09:30.673708 2625 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:09:30.675944 kubelet[2625]: I0909 00:09:30.675886 2625 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 00:09:30.676051 kubelet[2625]: I0909 00:09:30.676023 2625 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:09:30.678068 kubelet[2625]: I0909 00:09:30.678023 2625 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:09:30.679323 kubelet[2625]: I0909 00:09:30.679302 2625 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:09:30.679323 kubelet[2625]: I0909 00:09:30.679319 2625 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:09:30.679436 kubelet[2625]: I0909 00:09:30.679407 2625 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:09:30.688645 kubelet[2625]: I0909 00:09:30.688291 2625 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 00:09:30.688645 kubelet[2625]: I0909 00:09:30.688315 2625 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 00:09:30.688645 kubelet[2625]: I0909 00:09:30.688328 2625 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 00:09:30.688645 kubelet[2625]: E0909 00:09:30.688376 2625 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:09:30.725531 kubelet[2625]: I0909 00:09:30.725439 2625 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 00:09:30.725531 kubelet[2625]: I0909 00:09:30.725457 2625 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 00:09:30.725531 kubelet[2625]: I0909 00:09:30.725476 2625 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:09:30.725668 kubelet[2625]: I0909 00:09:30.725614 2625 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:09:30.725668 kubelet[2625]: I0909 00:09:30.725625 2625 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:09:30.725668 kubelet[2625]: I0909 00:09:30.725643 2625 policy_none.go:49] "None policy: Start" Sep 9 00:09:30.728500 kubelet[2625]: I0909 00:09:30.728479 2625 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 00:09:30.728500 kubelet[2625]: I0909 00:09:30.728506 2625 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:09:30.728670 kubelet[2625]: I0909 00:09:30.728654 2625 state_mem.go:75] "Updated machine memory state" Sep 9 00:09:30.730552 kubelet[2625]: I0909 00:09:30.729763 2625 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:09:30.730552 kubelet[2625]: I0909 00:09:30.729937 2625 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:09:30.730552 kubelet[2625]: I0909 00:09:30.729951 2625 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:09:30.730552 kubelet[2625]: I0909 00:09:30.730187 2625 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:09:30.795574 kubelet[2625]: E0909 00:09:30.795533 2625 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:09:30.833869 kubelet[2625]: I0909 00:09:30.833844 2625 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 00:09:30.840720 kubelet[2625]: I0909 00:09:30.840655 2625 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 00:09:30.840720 kubelet[2625]: I0909 00:09:30.840722 2625 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 00:09:30.877322 kubelet[2625]: I0909 00:09:30.877277 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f287b54b96d45de1449f4bf35ac86916-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f287b54b96d45de1449f4bf35ac86916\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:09:30.877322 kubelet[2625]: I0909 00:09:30.877314 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f287b54b96d45de1449f4bf35ac86916-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f287b54b96d45de1449f4bf35ac86916\") " 
pod="kube-system/kube-apiserver-localhost" Sep 9 00:09:30.877448 kubelet[2625]: I0909 00:09:30.877335 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:09:30.877448 kubelet[2625]: I0909 00:09:30.877351 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:09:30.877448 kubelet[2625]: I0909 00:09:30.877371 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:09:30.877448 kubelet[2625]: I0909 00:09:30.877388 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f287b54b96d45de1449f4bf35ac86916-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f287b54b96d45de1449f4bf35ac86916\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:09:30.877448 kubelet[2625]: I0909 00:09:30.877403 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:09:30.877552 kubelet[2625]: I0909 00:09:30.877418 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:09:30.877552 kubelet[2625]: I0909 00:09:30.877435 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:09:31.096877 kubelet[2625]: E0909 00:09:31.096728 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:31.096877 kubelet[2625]: E0909 00:09:31.096739 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:31.098116 kubelet[2625]: E0909 00:09:31.097224 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:31.656901 kubelet[2625]: I0909 00:09:31.656862 2625 apiserver.go:52] "Watching apiserver" Sep 9 00:09:31.676237 kubelet[2625]: I0909 00:09:31.676212 2625 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 00:09:31.701923 kubelet[2625]: E0909 00:09:31.701666 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:31.701923 kubelet[2625]: E0909 00:09:31.701868 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:31.706274 kubelet[2625]: E0909 00:09:31.706233 2625 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:09:31.706405 kubelet[2625]: E0909 00:09:31.706392 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:31.722436 kubelet[2625]: I0909 00:09:31.722206 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.7221892260000002 podStartE2EDuration="2.722189226s" podCreationTimestamp="2025-09-09 00:09:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:09:31.722153095 +0000 UTC m=+1.121584757" watchObservedRunningTime="2025-09-09 00:09:31.722189226 +0000 UTC m=+1.121620888" Sep 9 00:09:31.732572 kubelet[2625]: I0909 00:09:31.732359 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7323404199999999 podStartE2EDuration="1.73234042s" podCreationTimestamp="2025-09-09 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:09:31.72958975 +0000 UTC m=+1.129021412" watchObservedRunningTime="2025-09-09 00:09:31.73234042 +0000 UTC m=+1.131772082" Sep 9 00:09:31.738156 kubelet[2625]: I0909 00:09:31.737950 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.737936687 podStartE2EDuration="1.737936687s" podCreationTimestamp="2025-09-09 00:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:09:31.737921898 +0000 UTC m=+1.137353520" watchObservedRunningTime="2025-09-09 00:09:31.737936687 +0000 UTC m=+1.137368348" Sep 9 00:09:32.703715 kubelet[2625]: E0909 00:09:32.703360 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:33.503617 kubelet[2625]: E0909 00:09:33.503515 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:33.704396 kubelet[2625]: E0909 00:09:33.704271 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:37.074608 kubelet[2625]: I0909 00:09:37.074581 2625 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:09:37.074959 containerd[1552]: time="2025-09-09T00:09:37.074899229Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:09:37.075175 kubelet[2625]: I0909 00:09:37.075069 2625 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:09:37.559486 kubelet[2625]: E0909 00:09:37.559413 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:37.710444 kubelet[2625]: E0909 00:09:37.710410 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:38.031472 kubelet[2625]: I0909 00:09:38.031433 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11ea5bbf-6f87-4424-aa16-ec5e9cecc788-kube-proxy\") pod \"kube-proxy-zdtxv\" (UID: \"11ea5bbf-6f87-4424-aa16-ec5e9cecc788\") " pod="kube-system/kube-proxy-zdtxv" Sep 9 00:09:38.031472 kubelet[2625]: I0909 00:09:38.031477 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11ea5bbf-6f87-4424-aa16-ec5e9cecc788-xtables-lock\") pod \"kube-proxy-zdtxv\" (UID: \"11ea5bbf-6f87-4424-aa16-ec5e9cecc788\") " pod="kube-system/kube-proxy-zdtxv" Sep 9 00:09:38.031700 kubelet[2625]: I0909 00:09:38.031497 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11ea5bbf-6f87-4424-aa16-ec5e9cecc788-lib-modules\") pod \"kube-proxy-zdtxv\" (UID: \"11ea5bbf-6f87-4424-aa16-ec5e9cecc788\") " pod="kube-system/kube-proxy-zdtxv" Sep 9 00:09:38.031700 kubelet[2625]: I0909 00:09:38.031516 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7dk\" (UniqueName: \"kubernetes.io/projected/11ea5bbf-6f87-4424-aa16-ec5e9cecc788-kube-api-access-mg7dk\") pod \"kube-proxy-zdtxv\" (UID: \"11ea5bbf-6f87-4424-aa16-ec5e9cecc788\") " pod="kube-system/kube-proxy-zdtxv" Sep 9 00:09:38.232863 kubelet[2625]: I0909 00:09:38.232755 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2djst\" (UniqueName: \"kubernetes.io/projected/74eea0b8-4e29-419e-984a-892f070d75aa-kube-api-access-2djst\") pod \"tigera-operator-58fc44c59b-9llx6\" (UID: \"74eea0b8-4e29-419e-984a-892f070d75aa\") " pod="tigera-operator/tigera-operator-58fc44c59b-9llx6" Sep 9 00:09:38.232863 kubelet[2625]: I0909 00:09:38.232832 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/74eea0b8-4e29-419e-984a-892f070d75aa-var-lib-calico\") pod \"tigera-operator-58fc44c59b-9llx6\" (UID: \"74eea0b8-4e29-419e-984a-892f070d75aa\") " pod="tigera-operator/tigera-operator-58fc44c59b-9llx6" Sep 9 00:09:38.249749 kubelet[2625]: E0909 00:09:38.248978 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:38.249838 containerd[1552]: time="2025-09-09T00:09:38.249755717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zdtxv,Uid:11ea5bbf-6f87-4424-aa16-ec5e9cecc788,Namespace:kube-system,Attempt:0,}" Sep 9 00:09:38.272752 containerd[1552]: time="2025-09-09T00:09:38.272666308Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:09:38.272752 containerd[1552]: time="2025-09-09T00:09:38.272723200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:09:38.272856 containerd[1552]: time="2025-09-09T00:09:38.272738003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:38.272920 containerd[1552]: time="2025-09-09T00:09:38.272845145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:38.301093 containerd[1552]: time="2025-09-09T00:09:38.300999134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zdtxv,Uid:11ea5bbf-6f87-4424-aa16-ec5e9cecc788,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fa402b593d5352fc7f2b29b488fe75edfb0e297ea56b42aec21aac0dfca01e5\"" Sep 9 00:09:38.302432 kubelet[2625]: E0909 00:09:38.302408 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:38.304195 containerd[1552]: time="2025-09-09T00:09:38.304082688Z" level=info msg="CreateContainer within sandbox \"4fa402b593d5352fc7f2b29b488fe75edfb0e297ea56b42aec21aac0dfca01e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:09:38.322322 containerd[1552]: time="2025-09-09T00:09:38.322290272Z" level=info msg="CreateContainer within sandbox \"4fa402b593d5352fc7f2b29b488fe75edfb0e297ea56b42aec21aac0dfca01e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3221d11d6a0b03f7f404c0d2a24c007de3617c60963e945a9a6106fc963ac410\"" Sep 9 00:09:38.322772 containerd[1552]: time="2025-09-09T00:09:38.322753087Z" level=info msg="StartContainer for \"3221d11d6a0b03f7f404c0d2a24c007de3617c60963e945a9a6106fc963ac410\"" Sep 9 00:09:38.367582 containerd[1552]: time="2025-09-09T00:09:38.367490447Z" level=info msg="StartContainer for \"3221d11d6a0b03f7f404c0d2a24c007de3617c60963e945a9a6106fc963ac410\" returns successfully" Sep 9 00:09:38.509196 containerd[1552]: time="2025-09-09T00:09:38.509148937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-9llx6,Uid:74eea0b8-4e29-419e-984a-892f070d75aa,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:09:38.536697 containerd[1552]: time="2025-09-09T00:09:38.536512124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:09:38.536697 containerd[1552]: time="2025-09-09T00:09:38.536554613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:09:38.536697 containerd[1552]: time="2025-09-09T00:09:38.536575737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:38.536697 containerd[1552]: time="2025-09-09T00:09:38.536657114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:38.582469 containerd[1552]: time="2025-09-09T00:09:38.582372755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-9llx6,Uid:74eea0b8-4e29-419e-984a-892f070d75aa,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0a8d297b8c84d16709d887a103bd966f1778c528700cfc7b936b4e6f3960476c\"" Sep 9 00:09:38.585027 containerd[1552]: time="2025-09-09T00:09:38.584991613Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:09:38.714765 kubelet[2625]: E0909 00:09:38.714714 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:39.161522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount247867286.mount: Deactivated successfully. Sep 9 00:09:39.548305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4039115063.mount: Deactivated successfully. Sep 9 00:09:39.925229 containerd[1552]: time="2025-09-09T00:09:39.925091981Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:39.926236 containerd[1552]: time="2025-09-09T00:09:39.926199596Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 9 00:09:39.927264 containerd[1552]: time="2025-09-09T00:09:39.927235798Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:39.930174 containerd[1552]: time="2025-09-09T00:09:39.930144484Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:39.931126 containerd[1552]: time="2025-09-09T00:09:39.930918595Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.345886054s" Sep 9 00:09:39.931126 containerd[1552]: time="2025-09-09T00:09:39.930955042Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 9 00:09:39.936036 containerd[1552]: time="2025-09-09T00:09:39.935986862Z" level=info msg="CreateContainer within sandbox \"0a8d297b8c84d16709d887a103bd966f1778c528700cfc7b936b4e6f3960476c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 00:09:39.949014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121478134.mount: Deactivated successfully. 
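The operator pull that just completed reports 22152365 bytes read in 1.345886054s, i.e. roughly 15.7 MiB/s of effective throughput from quay.io. A quick Go check of that arithmetic (the two figures are copied from the containerd events above; the throughput metric is derived here, containerd does not log it):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Figures reported by containerd for the quay.io/tigera/operator:v1.38.6 pull.
        const bytesRead = 22152365 // "bytes read=22152365"
        dur, err := time.ParseDuration("1.345886054s")
        if err != nil {
            panic(err)
        }
        bps := float64(bytesRead) / dur.Seconds()
        fmt.Printf("effective pull throughput: %.1f MiB/s\n", bps/(1<<20))
    }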
Sep 9 00:09:39.951875 containerd[1552]: time="2025-09-09T00:09:39.951837548Z" level=info msg="CreateContainer within sandbox \"0a8d297b8c84d16709d887a103bd966f1778c528700cfc7b936b4e6f3960476c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"45d25d13e534e75e29de162ed0d44be36a7789d7f0faebf609665bcd5c9e141b\"" Sep 9 00:09:39.952245 containerd[1552]: time="2025-09-09T00:09:39.952220542Z" level=info msg="StartContainer for \"45d25d13e534e75e29de162ed0d44be36a7789d7f0faebf609665bcd5c9e141b\"" Sep 9 00:09:39.997670 containerd[1552]: time="2025-09-09T00:09:39.997557448Z" level=info msg="StartContainer for \"45d25d13e534e75e29de162ed0d44be36a7789d7f0faebf609665bcd5c9e141b\" returns successfully" Sep 9 00:09:40.730913 kubelet[2625]: I0909 00:09:40.730579 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zdtxv" podStartSLOduration=3.730562534 podStartE2EDuration="3.730562534s" podCreationTimestamp="2025-09-09 00:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:09:38.729001187 +0000 UTC m=+8.128432889" watchObservedRunningTime="2025-09-09 00:09:40.730562534 +0000 UTC m=+10.129994196" Sep 9 00:09:41.360761 kubelet[2625]: E0909 00:09:41.360469 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:41.377297 kubelet[2625]: I0909 00:09:41.377224 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-9llx6" podStartSLOduration=2.027008296 podStartE2EDuration="3.377205955s" podCreationTimestamp="2025-09-09 00:09:38 +0000 UTC" firstStartedPulling="2025-09-09 00:09:38.584524437 +0000 UTC m=+7.983956099" lastFinishedPulling="2025-09-09 00:09:39.934722096 +0000 UTC m=+9.334153758" observedRunningTime="2025-09-09 00:09:40.730829383 +0000 UTC m=+10.130261045" watchObservedRunningTime="2025-09-09 00:09:41.377205955 +0000 UTC m=+10.776637617" Sep 9 00:09:41.748122 kubelet[2625]: E0909 00:09:41.744822 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:43.516742 kubelet[2625]: E0909 00:09:43.516708 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:45.463202 sudo[1753]: pam_unix(sudo:session): session closed for user root Sep 9 00:09:45.465506 sshd[1747]: pam_unix(sshd:session): session closed for user core Sep 9 00:09:45.468463 systemd[1]: sshd@6-10.0.0.10:22-10.0.0.1:48600.service: Deactivated successfully. Sep 9 00:09:45.483632 systemd-logind[1530]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:09:45.483998 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:09:45.487444 systemd-logind[1530]: Removed session 7. Sep 9 00:09:46.145222 update_engine[1533]: I20250909 00:09:46.145151 1533 update_attempter.cc:509] Updating boot flags... 
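The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). For tigera-operator-58fc44c59b-9llx6 that gives 00:09:41.377205955 - 00:09:38 = 3.377205955s end to end, and 3.377205955 - 1.350197659 = 2.027008296s against the SLO, matching the logged values; for kube-proxy-zdtxv the pull timestamps are zero, so both figures coincide at 3.730562534s. A short Go reproduction of the arithmetic (timestamps copied from the log; mustParse and the variable names are mine):

    package main

    import (
        "fmt"
        "time"
    )

    // layout matches the timestamp format the tracker logs,
    // e.g. "2025-09-09 00:09:38 +0000 UTC".
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps reported for tigera-operator-58fc44c59b-9llx6.
        created := mustParse("2025-09-09 00:09:38 +0000 UTC")
        firstPull := mustParse("2025-09-09 00:09:38.584524437 +0000 UTC")
        lastPull := mustParse("2025-09-09 00:09:39.934722096 +0000 UTC")
        running := mustParse("2025-09-09 00:09:41.377205955 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // pull window excluded
        fmt.Printf("e2e=%s slo=%s\n", e2e, slo)
        // Prints: e2e=3.377205955s slo=2.027008296s
    }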
Sep 9 00:09:46.178133 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3029) Sep 9 00:09:46.229134 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3028) Sep 9 00:09:50.206810 kubelet[2625]: I0909 00:09:50.206754 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a13c7762-ba84-432f-ae1a-44119eee5fe9-tigera-ca-bundle\") pod \"calico-typha-6b67b6b98c-74zs7\" (UID: \"a13c7762-ba84-432f-ae1a-44119eee5fe9\") " pod="calico-system/calico-typha-6b67b6b98c-74zs7" Sep 9 00:09:50.206810 kubelet[2625]: I0909 00:09:50.206805 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwdmk\" (UniqueName: \"kubernetes.io/projected/a13c7762-ba84-432f-ae1a-44119eee5fe9-kube-api-access-bwdmk\") pod \"calico-typha-6b67b6b98c-74zs7\" (UID: \"a13c7762-ba84-432f-ae1a-44119eee5fe9\") " pod="calico-system/calico-typha-6b67b6b98c-74zs7" Sep 9 00:09:50.207217 kubelet[2625]: I0909 00:09:50.206821 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a13c7762-ba84-432f-ae1a-44119eee5fe9-typha-certs\") pod \"calico-typha-6b67b6b98c-74zs7\" (UID: \"a13c7762-ba84-432f-ae1a-44119eee5fe9\") " pod="calico-system/calico-typha-6b67b6b98c-74zs7" Sep 9 00:09:50.453228 kubelet[2625]: E0909 00:09:50.452300 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:50.454582 containerd[1552]: time="2025-09-09T00:09:50.454541672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b67b6b98c-74zs7,Uid:a13c7762-ba84-432f-ae1a-44119eee5fe9,Namespace:calico-system,Attempt:0,}" Sep 9 00:09:50.498270 containerd[1552]: time="2025-09-09T00:09:50.494941361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:09:50.498270 containerd[1552]: time="2025-09-09T00:09:50.495062694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:09:50.498270 containerd[1552]: time="2025-09-09T00:09:50.495092618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:50.498270 containerd[1552]: time="2025-09-09T00:09:50.495326924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:50.511253 kubelet[2625]: I0909 00:09:50.510054 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/60f1e1c3-5701-42b4-8f19-59e5aad9baf9-cni-net-dir\") pod \"calico-node-47d4b\" (UID: \"60f1e1c3-5701-42b4-8f19-59e5aad9baf9\") " pod="calico-system/calico-node-47d4b" Sep 9 00:09:50.511253 kubelet[2625]: I0909 00:09:50.510095 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60f1e1c3-5701-42b4-8f19-59e5aad9baf9-xtables-lock\") pod \"calico-node-47d4b\" (UID: \"60f1e1c3-5701-42b4-8f19-59e5aad9baf9\") " pod="calico-system/calico-node-47d4b" Sep 9 00:09:50.511253 kubelet[2625]: I0909 00:09:50.510135 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/60f1e1c3-5701-42b4-8f19-59e5aad9baf9-flexvol-driver-host\") pod \"calico-node-47d4b\" (UID: \"60f1e1c3-5701-42b4-8f19-59e5aad9baf9\") " pod="calico-system/calico-node-47d4b" Sep 9 00:09:50.511253 kubelet[2625]: I0909 00:09:50.510153 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/60f1e1c3-5701-42b4-8f19-59e5aad9baf9-var-lib-calico\") pod \"calico-node-47d4b\" (UID: \"60f1e1c3-5701-42b4-8f19-59e5aad9baf9\") " pod="calico-system/calico-node-47d4b" Sep 9 00:09:50.511253 kubelet[2625]: I0909 00:09:50.510168 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/60f1e1c3-5701-42b4-8f19-59e5aad9baf9-node-certs\") pod \"calico-node-47d4b\" (UID: \"60f1e1c3-5701-42b4-8f19-59e5aad9baf9\") " pod="calico-system/calico-node-47d4b" Sep 9 00:09:50.511486 kubelet[2625]: I0909 00:09:50.510182 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60f1e1c3-5701-42b4-8f19-59e5aad9baf9-lib-modules\") pod \"calico-node-47d4b\" (UID: \"60f1e1c3-5701-42b4-8f19-59e5aad9baf9\") " pod="calico-system/calico-node-47d4b" Sep 9 00:09:50.511486 kubelet[2625]: I0909 00:09:50.510197 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/60f1e1c3-5701-42b4-8f19-59e5aad9baf9-var-run-calico\") pod \"calico-node-47d4b\" (UID: \"60f1e1c3-5701-42b4-8f19-59e5aad9baf9\") " pod="calico-system/calico-node-47d4b" Sep 9 00:09:50.511486 kubelet[2625]: I0909 00:09:50.510214 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/60f1e1c3-5701-42b4-8f19-59e5aad9baf9-cni-log-dir\") pod \"calico-node-47d4b\" (UID: \"60f1e1c3-5701-42b4-8f19-59e5aad9baf9\") " pod="calico-system/calico-node-47d4b" Sep 9 00:09:50.511486 kubelet[2625]: I0909 00:09:50.510230 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpl6m\" (UniqueName: \"kubernetes.io/projected/60f1e1c3-5701-42b4-8f19-59e5aad9baf9-kube-api-access-kpl6m\") pod \"calico-node-47d4b\" (UID: \"60f1e1c3-5701-42b4-8f19-59e5aad9baf9\") " pod="calico-system/calico-node-47d4b" Sep 9 00:09:50.511486 
kubelet[2625]: I0909 00:09:50.510248 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60f1e1c3-5701-42b4-8f19-59e5aad9baf9-tigera-ca-bundle\") pod \"calico-node-47d4b\" (UID: \"60f1e1c3-5701-42b4-8f19-59e5aad9baf9\") " pod="calico-system/calico-node-47d4b" Sep 9 00:09:50.511594 kubelet[2625]: I0909 00:09:50.510273 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/60f1e1c3-5701-42b4-8f19-59e5aad9baf9-policysync\") pod \"calico-node-47d4b\" (UID: \"60f1e1c3-5701-42b4-8f19-59e5aad9baf9\") " pod="calico-system/calico-node-47d4b" Sep 9 00:09:50.511594 kubelet[2625]: I0909 00:09:50.510290 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/60f1e1c3-5701-42b4-8f19-59e5aad9baf9-cni-bin-dir\") pod \"calico-node-47d4b\" (UID: \"60f1e1c3-5701-42b4-8f19-59e5aad9baf9\") " pod="calico-system/calico-node-47d4b" Sep 9 00:09:50.553684 containerd[1552]: time="2025-09-09T00:09:50.553599599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b67b6b98c-74zs7,Uid:a13c7762-ba84-432f-ae1a-44119eee5fe9,Namespace:calico-system,Attempt:0,} returns sandbox id \"7c02da7d55a8c789aaf13d3bcb136e68e97894192158d44022622b4bd4570f39\"" Sep 9 00:09:50.554376 kubelet[2625]: E0909 00:09:50.554350 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:50.555391 containerd[1552]: time="2025-09-09T00:09:50.555367955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 00:09:50.612532 kubelet[2625]: E0909 00:09:50.612464 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.612532 kubelet[2625]: W0909 00:09:50.612524 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.612682 kubelet[2625]: E0909 00:09:50.612548 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.612864 kubelet[2625]: E0909 00:09:50.612844 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.612864 kubelet[2625]: W0909 00:09:50.612861 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.612956 kubelet[2625]: E0909 00:09:50.612927 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.613247 kubelet[2625]: E0909 00:09:50.613232 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.613247 kubelet[2625]: W0909 00:09:50.613247 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.613322 kubelet[2625]: E0909 00:09:50.613263 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.625896 kubelet[2625]: E0909 00:09:50.625747 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.625896 kubelet[2625]: W0909 00:09:50.625766 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.625896 kubelet[2625]: E0909 00:09:50.625785 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.626180 kubelet[2625]: E0909 00:09:50.626150 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.626180 kubelet[2625]: W0909 00:09:50.626162 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.626627 kubelet[2625]: E0909 00:09:50.626584 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.626857 kubelet[2625]: E0909 00:09:50.626747 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.626857 kubelet[2625]: W0909 00:09:50.626762 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.626857 kubelet[2625]: E0909 00:09:50.626776 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.627028 kubelet[2625]: E0909 00:09:50.627011 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.627086 kubelet[2625]: W0909 00:09:50.627076 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.627242 kubelet[2625]: E0909 00:09:50.627216 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.627376 kubelet[2625]: E0909 00:09:50.627365 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.627470 kubelet[2625]: W0909 00:09:50.627422 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.627515 kubelet[2625]: E0909 00:09:50.627467 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.627801 kubelet[2625]: E0909 00:09:50.627751 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.627801 kubelet[2625]: W0909 00:09:50.627762 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.627916 kubelet[2625]: E0909 00:09:50.627853 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.628184 kubelet[2625]: E0909 00:09:50.628129 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.628184 kubelet[2625]: W0909 00:09:50.628140 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.628255 kubelet[2625]: E0909 00:09:50.628182 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.628548 kubelet[2625]: E0909 00:09:50.628456 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.628548 kubelet[2625]: W0909 00:09:50.628466 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.628654 kubelet[2625]: E0909 00:09:50.628640 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.628747 kubelet[2625]: E0909 00:09:50.628738 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.628798 kubelet[2625]: W0909 00:09:50.628784 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.628894 kubelet[2625]: E0909 00:09:50.628865 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.630228 kubelet[2625]: E0909 00:09:50.630088 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.630228 kubelet[2625]: W0909 00:09:50.630099 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.630348 kubelet[2625]: E0909 00:09:50.630334 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.630446 kubelet[2625]: E0909 00:09:50.630435 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.630511 kubelet[2625]: W0909 00:09:50.630498 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.630667 kubelet[2625]: E0909 00:09:50.630640 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.631008 kubelet[2625]: E0909 00:09:50.630954 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.631008 kubelet[2625]: W0909 00:09:50.630966 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.631079 kubelet[2625]: E0909 00:09:50.631012 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.631360 kubelet[2625]: E0909 00:09:50.631341 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.631360 kubelet[2625]: W0909 00:09:50.631360 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.631501 kubelet[2625]: E0909 00:09:50.631424 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.631569 kubelet[2625]: E0909 00:09:50.631550 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.631569 kubelet[2625]: W0909 00:09:50.631566 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.631628 kubelet[2625]: E0909 00:09:50.631607 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.631854 kubelet[2625]: E0909 00:09:50.631839 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.631854 kubelet[2625]: W0909 00:09:50.631853 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.631957 kubelet[2625]: E0909 00:09:50.631933 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.632120 kubelet[2625]: E0909 00:09:50.632086 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.632120 kubelet[2625]: W0909 00:09:50.632099 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.632188 kubelet[2625]: E0909 00:09:50.632141 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.632416 kubelet[2625]: E0909 00:09:50.632403 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.632447 kubelet[2625]: W0909 00:09:50.632417 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.632759 kubelet[2625]: E0909 00:09:50.632472 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.632759 kubelet[2625]: E0909 00:09:50.632669 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.632759 kubelet[2625]: W0909 00:09:50.632680 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.632896 kubelet[2625]: E0909 00:09:50.632882 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.632981 kubelet[2625]: E0909 00:09:50.632886 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.633049 kubelet[2625]: W0909 00:09:50.633038 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.633204 kubelet[2625]: E0909 00:09:50.633168 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.633340 kubelet[2625]: E0909 00:09:50.633326 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.633409 kubelet[2625]: W0909 00:09:50.633398 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.633546 kubelet[2625]: E0909 00:09:50.633526 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.633858 kubelet[2625]: E0909 00:09:50.633791 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.633858 kubelet[2625]: W0909 00:09:50.633802 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.633858 kubelet[2625]: E0909 00:09:50.633831 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.634259 kubelet[2625]: E0909 00:09:50.634176 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.634259 kubelet[2625]: W0909 00:09:50.634187 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.634259 kubelet[2625]: E0909 00:09:50.634213 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.634583 kubelet[2625]: E0909 00:09:50.634571 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.634670 kubelet[2625]: W0909 00:09:50.634644 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.634821 kubelet[2625]: E0909 00:09:50.634746 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.635115 kubelet[2625]: E0909 00:09:50.635000 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.635115 kubelet[2625]: W0909 00:09:50.635011 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.635115 kubelet[2625]: E0909 00:09:50.635042 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.635327 kubelet[2625]: E0909 00:09:50.635298 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.635395 kubelet[2625]: W0909 00:09:50.635373 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.635522 kubelet[2625]: E0909 00:09:50.635496 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.635810 kubelet[2625]: E0909 00:09:50.635797 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.635930 kubelet[2625]: W0909 00:09:50.635879 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.635977 kubelet[2625]: E0909 00:09:50.635943 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.636257 kubelet[2625]: E0909 00:09:50.636213 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.636257 kubelet[2625]: W0909 00:09:50.636225 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.636257 kubelet[2625]: E0909 00:09:50.636235 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.644065 kubelet[2625]: E0909 00:09:50.644045 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.644065 kubelet[2625]: W0909 00:09:50.644063 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.644167 kubelet[2625]: E0909 00:09:50.644078 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.787304 kubelet[2625]: E0909 00:09:50.784992 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6ncd" podUID="fed5e32a-e444-4618-8bda-5d30e5270140" Sep 9 00:09:50.804133 kubelet[2625]: E0909 00:09:50.804085 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.804133 kubelet[2625]: W0909 00:09:50.804119 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.804133 kubelet[2625]: E0909 00:09:50.804140 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.804706 kubelet[2625]: E0909 00:09:50.804671 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.804706 kubelet[2625]: W0909 00:09:50.804690 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.804706 kubelet[2625]: E0909 00:09:50.804704 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.805316 kubelet[2625]: E0909 00:09:50.805298 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.805316 kubelet[2625]: W0909 00:09:50.805314 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.805404 kubelet[2625]: E0909 00:09:50.805326 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.806239 kubelet[2625]: E0909 00:09:50.805696 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.806239 kubelet[2625]: W0909 00:09:50.805769 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.806239 kubelet[2625]: E0909 00:09:50.805785 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.806568 kubelet[2625]: E0909 00:09:50.806537 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.806568 kubelet[2625]: W0909 00:09:50.806558 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.806568 kubelet[2625]: E0909 00:09:50.806570 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.807737 kubelet[2625]: E0909 00:09:50.807355 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.807737 kubelet[2625]: W0909 00:09:50.807517 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.807737 kubelet[2625]: E0909 00:09:50.807575 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.808867 kubelet[2625]: E0909 00:09:50.808430 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.808867 kubelet[2625]: W0909 00:09:50.808499 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.808867 kubelet[2625]: E0909 00:09:50.808520 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.809620 kubelet[2625]: E0909 00:09:50.809565 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.809768 kubelet[2625]: W0909 00:09:50.809750 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.809920 kubelet[2625]: E0909 00:09:50.809803 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.810381 kubelet[2625]: E0909 00:09:50.810363 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.810381 kubelet[2625]: W0909 00:09:50.810381 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.810467 kubelet[2625]: E0909 00:09:50.810393 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.810601 kubelet[2625]: E0909 00:09:50.810589 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.810601 kubelet[2625]: W0909 00:09:50.810601 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.810664 kubelet[2625]: E0909 00:09:50.810609 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.810762 kubelet[2625]: E0909 00:09:50.810753 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.810762 kubelet[2625]: W0909 00:09:50.810763 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.810824 kubelet[2625]: E0909 00:09:50.810770 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.811133 kubelet[2625]: E0909 00:09:50.811091 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.811170 kubelet[2625]: W0909 00:09:50.811157 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.811199 kubelet[2625]: E0909 00:09:50.811172 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.812134 kubelet[2625]: E0909 00:09:50.812065 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.812134 kubelet[2625]: W0909 00:09:50.812131 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.812310 kubelet[2625]: E0909 00:09:50.812146 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.812433 kubelet[2625]: E0909 00:09:50.812403 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.812433 kubelet[2625]: W0909 00:09:50.812416 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.812433 kubelet[2625]: E0909 00:09:50.812426 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.812707 kubelet[2625]: E0909 00:09:50.812692 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.812707 kubelet[2625]: W0909 00:09:50.812707 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.812764 kubelet[2625]: E0909 00:09:50.812718 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.813018 kubelet[2625]: E0909 00:09:50.813002 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.813018 kubelet[2625]: W0909 00:09:50.813016 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.813091 kubelet[2625]: E0909 00:09:50.813029 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.813386 kubelet[2625]: E0909 00:09:50.813373 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.813422 kubelet[2625]: W0909 00:09:50.813387 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.813422 kubelet[2625]: E0909 00:09:50.813397 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.813564 kubelet[2625]: E0909 00:09:50.813555 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.813564 kubelet[2625]: W0909 00:09:50.813564 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.813621 kubelet[2625]: E0909 00:09:50.813572 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.813886 kubelet[2625]: E0909 00:09:50.813872 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.813886 kubelet[2625]: W0909 00:09:50.813885 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.813977 kubelet[2625]: E0909 00:09:50.813895 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.814347 kubelet[2625]: E0909 00:09:50.814330 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.814347 kubelet[2625]: W0909 00:09:50.814347 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.814425 kubelet[2625]: E0909 00:09:50.814358 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.814717 kubelet[2625]: E0909 00:09:50.814700 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.814717 kubelet[2625]: W0909 00:09:50.814717 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.814789 kubelet[2625]: E0909 00:09:50.814730 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.814789 kubelet[2625]: I0909 00:09:50.814754 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fed5e32a-e444-4618-8bda-5d30e5270140-registration-dir\") pod \"csi-node-driver-s6ncd\" (UID: \"fed5e32a-e444-4618-8bda-5d30e5270140\") " pod="calico-system/csi-node-driver-s6ncd" Sep 9 00:09:50.814998 kubelet[2625]: E0909 00:09:50.814986 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.815031 kubelet[2625]: W0909 00:09:50.815000 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.815031 kubelet[2625]: E0909 00:09:50.815015 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.815080 kubelet[2625]: I0909 00:09:50.815032 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fed5e32a-e444-4618-8bda-5d30e5270140-varrun\") pod \"csi-node-driver-s6ncd\" (UID: \"fed5e32a-e444-4618-8bda-5d30e5270140\") " pod="calico-system/csi-node-driver-s6ncd" Sep 9 00:09:50.815302 kubelet[2625]: E0909 00:09:50.815287 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.815336 kubelet[2625]: W0909 00:09:50.815302 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.815336 kubelet[2625]: E0909 00:09:50.815318 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.815615 kubelet[2625]: E0909 00:09:50.815599 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.815644 kubelet[2625]: W0909 00:09:50.815616 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.815644 kubelet[2625]: E0909 00:09:50.815632 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.815945 kubelet[2625]: E0909 00:09:50.815931 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.815980 kubelet[2625]: W0909 00:09:50.815945 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.815980 kubelet[2625]: E0909 00:09:50.815969 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.816032 kubelet[2625]: I0909 00:09:50.815988 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fed5e32a-e444-4618-8bda-5d30e5270140-socket-dir\") pod \"csi-node-driver-s6ncd\" (UID: \"fed5e32a-e444-4618-8bda-5d30e5270140\") " pod="calico-system/csi-node-driver-s6ncd" Sep 9 00:09:50.816280 kubelet[2625]: E0909 00:09:50.816265 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.816319 kubelet[2625]: W0909 00:09:50.816281 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.816319 kubelet[2625]: E0909 00:09:50.816296 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.816525 kubelet[2625]: E0909 00:09:50.816505 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.816559 kubelet[2625]: W0909 00:09:50.816526 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.816559 kubelet[2625]: E0909 00:09:50.816540 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.816707 kubelet[2625]: E0909 00:09:50.816697 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.816739 kubelet[2625]: W0909 00:09:50.816707 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.816739 kubelet[2625]: E0909 00:09:50.816719 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.816739 kubelet[2625]: I0909 00:09:50.816736 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7jpm\" (UniqueName: \"kubernetes.io/projected/fed5e32a-e444-4618-8bda-5d30e5270140-kube-api-access-l7jpm\") pod \"csi-node-driver-s6ncd\" (UID: \"fed5e32a-e444-4618-8bda-5d30e5270140\") " pod="calico-system/csi-node-driver-s6ncd" Sep 9 00:09:50.816967 kubelet[2625]: E0909 00:09:50.816951 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.816999 kubelet[2625]: W0909 00:09:50.816968 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.816999 kubelet[2625]: E0909 00:09:50.816984 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.817193 kubelet[2625]: E0909 00:09:50.817181 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.817228 kubelet[2625]: W0909 00:09:50.817194 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.817228 kubelet[2625]: E0909 00:09:50.817209 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.817449 kubelet[2625]: E0909 00:09:50.817437 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.817481 kubelet[2625]: W0909 00:09:50.817449 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.817481 kubelet[2625]: E0909 00:09:50.817463 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.817534 kubelet[2625]: I0909 00:09:50.817480 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fed5e32a-e444-4618-8bda-5d30e5270140-kubelet-dir\") pod \"csi-node-driver-s6ncd\" (UID: \"fed5e32a-e444-4618-8bda-5d30e5270140\") " pod="calico-system/csi-node-driver-s6ncd" Sep 9 00:09:50.817772 kubelet[2625]: E0909 00:09:50.817759 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.817803 kubelet[2625]: W0909 00:09:50.817772 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.817803 kubelet[2625]: E0909 00:09:50.817787 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.817980 kubelet[2625]: E0909 00:09:50.817970 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.818011 kubelet[2625]: W0909 00:09:50.817980 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.818011 kubelet[2625]: E0909 00:09:50.817993 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.818228 kubelet[2625]: E0909 00:09:50.818218 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.818255 kubelet[2625]: W0909 00:09:50.818228 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.818255 kubelet[2625]: E0909 00:09:50.818239 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.818446 kubelet[2625]: E0909 00:09:50.818436 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.818477 kubelet[2625]: W0909 00:09:50.818446 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.818477 kubelet[2625]: E0909 00:09:50.818454 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.827051 containerd[1552]: time="2025-09-09T00:09:50.827011579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47d4b,Uid:60f1e1c3-5701-42b4-8f19-59e5aad9baf9,Namespace:calico-system,Attempt:0,}" Sep 9 00:09:50.857448 containerd[1552]: time="2025-09-09T00:09:50.856940344Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:09:50.857448 containerd[1552]: time="2025-09-09T00:09:50.857025914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:09:50.857448 containerd[1552]: time="2025-09-09T00:09:50.857075799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:50.858185 containerd[1552]: time="2025-09-09T00:09:50.858013424Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:09:50.912604 containerd[1552]: time="2025-09-09T00:09:50.912567885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-47d4b,Uid:60f1e1c3-5701-42b4-8f19-59e5aad9baf9,Namespace:calico-system,Attempt:0,} returns sandbox id \"5723abb537229b7c2f01a55edc7bf410c09519091297b0462a03bd3d806e0134\"" Sep 9 00:09:50.918313 kubelet[2625]: E0909 00:09:50.918291 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.918313 kubelet[2625]: W0909 00:09:50.918311 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.918597 kubelet[2625]: E0909 00:09:50.918329 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.918597 kubelet[2625]: E0909 00:09:50.918495 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.918597 kubelet[2625]: W0909 00:09:50.918502 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.918597 kubelet[2625]: E0909 00:09:50.918515 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.918722 kubelet[2625]: E0909 00:09:50.918705 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.918722 kubelet[2625]: W0909 00:09:50.918716 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.918722 kubelet[2625]: E0909 00:09:50.918729 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.919159 kubelet[2625]: E0909 00:09:50.918926 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.919159 kubelet[2625]: W0909 00:09:50.918941 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.919159 kubelet[2625]: E0909 00:09:50.918962 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.919330 kubelet[2625]: E0909 00:09:50.919293 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.919330 kubelet[2625]: W0909 00:09:50.919306 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.919480 kubelet[2625]: E0909 00:09:50.919414 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.919756 kubelet[2625]: E0909 00:09:50.919660 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.919756 kubelet[2625]: W0909 00:09:50.919673 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.919756 kubelet[2625]: E0909 00:09:50.919700 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.920169 kubelet[2625]: E0909 00:09:50.920055 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.920169 kubelet[2625]: W0909 00:09:50.920067 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.920169 kubelet[2625]: E0909 00:09:50.920091 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:50.920459 kubelet[2625]: E0909 00:09:50.920354 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.920459 kubelet[2625]: W0909 00:09:50.920366 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.920459 kubelet[2625]: E0909 00:09:50.920387 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:50.920687 kubelet[2625]: E0909 00:09:50.920633 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.920687 kubelet[2625]: W0909 00:09:50.920646 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.920687 kubelet[2625]: E0909 00:09:50.920684 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three-record FlexVolume probe failure repeats through Sep 9 00:09:50.925 ...]
Sep 9 00:09:50.936146 kubelet[2625]: E0909 00:09:50.936076 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:50.936146 kubelet[2625]: W0909 00:09:50.936094 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:50.936255 kubelet[2625]: E0909 00:09:50.936128 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:51.684287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount877067319.mount: Deactivated successfully.
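The trio of messages above repeats because kubelet's FlexVolume prober keeps re-scanning the plugin directory: each probe execs the driver binary with the `init` argument and JSON-decodes its stdout. With /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds not yet installed, the exec fails, stdout stays empty, and Go's encoding/json reports exactly "unexpected end of JSON input" on empty input. A minimal sketch of that failure mode (not kubelet's actual code; the DriverStatus fields are illustrative):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus loosely mirrors the JSON reply a FlexVolume driver is
// expected to print; the exact fields here are illustrative.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probeDriver(path string) error {
	// kubelet runs `<driver> init` and parses stdout as JSON.
	out, execErr := exec.Command(path, "init").Output()
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With the binary missing, out is empty and err reads
		// "unexpected end of JSON input" -- the error in this log.
		return fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return nil
}

func main() {
	fmt.Println(probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"))
}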
Sep 9 00:09:52.138317 containerd[1552]: time="2025-09-09T00:09:52.138213274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:52.139403 containerd[1552]: time="2025-09-09T00:09:52.139135207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 9 00:09:52.140612 containerd[1552]: time="2025-09-09T00:09:52.140581434Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:52.143099 containerd[1552]: time="2025-09-09T00:09:52.143056445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:52.143806 containerd[1552]: time="2025-09-09T00:09:52.143779278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.588376639s" Sep 9 00:09:52.143850 containerd[1552]: time="2025-09-09T00:09:52.143811401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 9 00:09:52.144982 containerd[1552]: time="2025-09-09T00:09:52.144867188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 00:09:52.161012 containerd[1552]: time="2025-09-09T00:09:52.160972700Z" level=info msg="CreateContainer within sandbox \"7c02da7d55a8c789aaf13d3bcb136e68e97894192158d44022622b4bd4570f39\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 00:09:52.172872 containerd[1552]: time="2025-09-09T00:09:52.172842503Z" level=info msg="CreateContainer within sandbox \"7c02da7d55a8c789aaf13d3bcb136e68e97894192158d44022622b4bd4570f39\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4bdbdad8e3c77eab0c29ed0a7e67b1ab51f47d6f74e0db85cc10708d9037f98d\"" Sep 9 00:09:52.173584 containerd[1552]: time="2025-09-09T00:09:52.173556815Z" level=info msg="StartContainer for \"4bdbdad8e3c77eab0c29ed0a7e67b1ab51f47d6f74e0db85cc10708d9037f98d\"" Sep 9 00:09:52.221856 containerd[1552]: time="2025-09-09T00:09:52.221817265Z" level=info msg="StartContainer for \"4bdbdad8e3c77eab0c29ed0a7e67b1ab51f47d6f74e0db85cc10708d9037f98d\" returns successfully" Sep 9 00:09:52.688958 kubelet[2625]: E0909 00:09:52.688919 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6ncd" podUID="fed5e32a-e444-4618-8bda-5d30e5270140" Sep 9 00:09:52.777996 kubelet[2625]: E0909 00:09:52.777921 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:52.791788 kubelet[2625]: I0909 00:09:52.791439 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-6b67b6b98c-74zs7" podStartSLOduration=1.202081683 podStartE2EDuration="2.791422382s" podCreationTimestamp="2025-09-09 00:09:50 +0000 UTC" firstStartedPulling="2025-09-09 00:09:50.555149731 +0000 UTC m=+19.954581393" lastFinishedPulling="2025-09-09 00:09:52.14449043 +0000 UTC m=+21.543922092" observedRunningTime="2025-09-09 00:09:52.790092407 +0000 UTC m=+22.189524069" watchObservedRunningTime="2025-09-09 00:09:52.791422382 +0000 UTC m=+22.190854004" Sep 9 00:09:52.827424 kubelet[2625]: E0909 00:09:52.827379 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.827424 kubelet[2625]: W0909 00:09:52.827408 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.827424 kubelet[2625]: E0909 00:09:52.827427 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.827663 kubelet[2625]: E0909 00:09:52.827626 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.827663 kubelet[2625]: W0909 00:09:52.827636 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.827663 kubelet[2625]: E0909 00:09:52.827645 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.827806 kubelet[2625]: E0909 00:09:52.827795 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.827806 kubelet[2625]: W0909 00:09:52.827804 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.827853 kubelet[2625]: E0909 00:09:52.827812 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.827981 kubelet[2625]: E0909 00:09:52.827958 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.827981 kubelet[2625]: W0909 00:09:52.827969 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.827981 kubelet[2625]: E0909 00:09:52.827979 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:52.828165 kubelet[2625]: E0909 00:09:52.828153 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.828165 kubelet[2625]: W0909 00:09:52.828164 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.828216 kubelet[2625]: E0909 00:09:52.828179 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.828341 kubelet[2625]: E0909 00:09:52.828329 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.828341 kubelet[2625]: W0909 00:09:52.828340 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.828421 kubelet[2625]: E0909 00:09:52.828352 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.828512 kubelet[2625]: E0909 00:09:52.828499 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.828512 kubelet[2625]: W0909 00:09:52.828508 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.828587 kubelet[2625]: E0909 00:09:52.828516 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.828753 kubelet[2625]: E0909 00:09:52.828679 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.828753 kubelet[2625]: W0909 00:09:52.828689 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.828753 kubelet[2625]: E0909 00:09:52.828698 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.828919 kubelet[2625]: E0909 00:09:52.828902 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.828919 kubelet[2625]: W0909 00:09:52.828915 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.828973 kubelet[2625]: E0909 00:09:52.828925 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:52.829114 kubelet[2625]: E0909 00:09:52.829088 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.829114 kubelet[2625]: W0909 00:09:52.829108 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.829166 kubelet[2625]: E0909 00:09:52.829116 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.829276 kubelet[2625]: E0909 00:09:52.829265 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.829276 kubelet[2625]: W0909 00:09:52.829275 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.829322 kubelet[2625]: E0909 00:09:52.829282 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.829435 kubelet[2625]: E0909 00:09:52.829424 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.829460 kubelet[2625]: W0909 00:09:52.829434 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.829460 kubelet[2625]: E0909 00:09:52.829442 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.829608 kubelet[2625]: E0909 00:09:52.829598 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.829633 kubelet[2625]: W0909 00:09:52.829608 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.829633 kubelet[2625]: E0909 00:09:52.829615 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.829779 kubelet[2625]: E0909 00:09:52.829767 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.829779 kubelet[2625]: W0909 00:09:52.829776 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.829833 kubelet[2625]: E0909 00:09:52.829785 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:52.829936 kubelet[2625]: E0909 00:09:52.829924 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.829936 kubelet[2625]: W0909 00:09:52.829934 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.829980 kubelet[2625]: E0909 00:09:52.829942 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.833563 kubelet[2625]: E0909 00:09:52.833542 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.833563 kubelet[2625]: W0909 00:09:52.833561 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.833658 kubelet[2625]: E0909 00:09:52.833585 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.833806 kubelet[2625]: E0909 00:09:52.833795 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.833806 kubelet[2625]: W0909 00:09:52.833806 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.833862 kubelet[2625]: E0909 00:09:52.833820 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.834005 kubelet[2625]: E0909 00:09:52.833995 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.834005 kubelet[2625]: W0909 00:09:52.834004 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.834067 kubelet[2625]: E0909 00:09:52.834016 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.834193 kubelet[2625]: E0909 00:09:52.834178 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.834193 kubelet[2625]: W0909 00:09:52.834192 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.834252 kubelet[2625]: E0909 00:09:52.834204 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:52.834354 kubelet[2625]: E0909 00:09:52.834339 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.834354 kubelet[2625]: W0909 00:09:52.834353 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.834405 kubelet[2625]: E0909 00:09:52.834365 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.834506 kubelet[2625]: E0909 00:09:52.834492 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.834506 kubelet[2625]: W0909 00:09:52.834506 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.834563 kubelet[2625]: E0909 00:09:52.834518 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.834696 kubelet[2625]: E0909 00:09:52.834686 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.834696 kubelet[2625]: W0909 00:09:52.834696 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.834754 kubelet[2625]: E0909 00:09:52.834716 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.835067 kubelet[2625]: E0909 00:09:52.834954 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.835067 kubelet[2625]: W0909 00:09:52.834971 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.835067 kubelet[2625]: E0909 00:09:52.834991 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.835252 kubelet[2625]: E0909 00:09:52.835240 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.835311 kubelet[2625]: W0909 00:09:52.835300 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.835440 kubelet[2625]: E0909 00:09:52.835395 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:52.835622 kubelet[2625]: E0909 00:09:52.835535 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.835622 kubelet[2625]: W0909 00:09:52.835547 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.835622 kubelet[2625]: E0909 00:09:52.835571 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.835776 kubelet[2625]: E0909 00:09:52.835764 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.835832 kubelet[2625]: W0909 00:09:52.835821 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.835887 kubelet[2625]: E0909 00:09:52.835877 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.836216 kubelet[2625]: E0909 00:09:52.836081 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.836216 kubelet[2625]: W0909 00:09:52.836093 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.836216 kubelet[2625]: E0909 00:09:52.836125 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.836378 kubelet[2625]: E0909 00:09:52.836365 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.836428 kubelet[2625]: W0909 00:09:52.836417 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.836491 kubelet[2625]: E0909 00:09:52.836481 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.836753 kubelet[2625]: E0909 00:09:52.836732 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.836753 kubelet[2625]: W0909 00:09:52.836747 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.836827 kubelet[2625]: E0909 00:09:52.836762 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:09:52.836958 kubelet[2625]: E0909 00:09:52.836949 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.836958 kubelet[2625]: W0909 00:09:52.836958 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.837015 kubelet[2625]: E0909 00:09:52.836969 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.837202 kubelet[2625]: E0909 00:09:52.837191 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.837202 kubelet[2625]: W0909 00:09:52.837201 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.837269 kubelet[2625]: E0909 00:09:52.837213 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.837675 kubelet[2625]: E0909 00:09:52.837557 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.837675 kubelet[2625]: W0909 00:09:52.837571 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.837675 kubelet[2625]: E0909 00:09:52.837588 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:09:52.837867 kubelet[2625]: E0909 00:09:52.837855 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:09:52.837933 kubelet[2625]: W0909 00:09:52.837922 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:09:52.837983 kubelet[2625]: E0909 00:09:52.837973 2625 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
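The pod_startup_latency_tracker record above is self-consistent in a useful way: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. startup latency excluding pulls. A quick check against the logged timestamps (a sketch; the layout string matches Go's default time.Time formatting seen in the record, with the monotonic m=+ suffixes dropped):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker record above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-09-09 00:09:50 +0000 UTC")
	firstPull := parse("2025-09-09 00:09:50.555149731 +0000 UTC")
	lastPull := parse("2025-09-09 00:09:52.14449043 +0000 UTC")
	running := parse("2025-09-09 00:09:52.791422382 +0000 UTC")

	e2e := running.Sub(created)     // 2.791422382s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 1.589340699s spent pulling the image
	fmt.Println(e2e - pull)         // prints 1.202081683s = podStartSLOduration
}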
Error: unexpected end of JSON input" Sep 9 00:09:53.460291 containerd[1552]: time="2025-09-09T00:09:53.460241272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:53.460884 containerd[1552]: time="2025-09-09T00:09:53.460806007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 9 00:09:53.461671 containerd[1552]: time="2025-09-09T00:09:53.461640848Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:53.463855 containerd[1552]: time="2025-09-09T00:09:53.463825979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:53.464777 containerd[1552]: time="2025-09-09T00:09:53.464523007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.319622495s" Sep 9 00:09:53.464777 containerd[1552]: time="2025-09-09T00:09:53.464568331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 9 00:09:53.467719 containerd[1552]: time="2025-09-09T00:09:53.467677072Z" level=info msg="CreateContainer within sandbox \"5723abb537229b7c2f01a55edc7bf410c09519091297b0462a03bd3d806e0134\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 00:09:53.487142 containerd[1552]: time="2025-09-09T00:09:53.487091193Z" level=info msg="CreateContainer within sandbox \"5723abb537229b7c2f01a55edc7bf410c09519091297b0462a03bd3d806e0134\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"72b0f0f5fb447960259df651fd6b706c585db8a635b7698b2bd5607cf4c2b152\"" Sep 9 00:09:53.487611 containerd[1552]: time="2025-09-09T00:09:53.487584201Z" level=info msg="StartContainer for \"72b0f0f5fb447960259df651fd6b706c585db8a635b7698b2bd5607cf4c2b152\"" Sep 9 00:09:53.534566 containerd[1552]: time="2025-09-09T00:09:53.534528389Z" level=info msg="StartContainer for \"72b0f0f5fb447960259df651fd6b706c585db8a635b7698b2bd5607cf4c2b152\" returns successfully" Sep 9 00:09:53.575248 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72b0f0f5fb447960259df651fd6b706c585db8a635b7698b2bd5607cf4c2b152-rootfs.mount: Deactivated successfully. 
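The pod2daemon-flexvol image pulled above is, in Calico's design, the flexvol-driver init container started in the same records: its job is to copy the uds driver binary into the host plugin directory kubelet has been probing, after which the FlexVolume errors stop (the container then exits, which is what the "shim disconnected" messages below reflect). A hypothetical on-node check that the driver landed:

package main

import (
	"fmt"
	"os"
)

func main() {
	// The path kubelet's FlexVolume prober was failing on earlier in this log.
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	info, err := os.Stat(driver)
	if err != nil {
		fmt.Println("driver still missing:", err)
		return
	}
	fmt.Printf("driver installed: mode=%v size=%d bytes\n", info.Mode(), info.Size())
}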
Sep 9 00:09:53.589387 containerd[1552]: time="2025-09-09T00:09:53.585620900Z" level=info msg="shim disconnected" id=72b0f0f5fb447960259df651fd6b706c585db8a635b7698b2bd5607cf4c2b152 namespace=k8s.io Sep 9 00:09:53.589506 containerd[1552]: time="2025-09-09T00:09:53.589411307Z" level=warning msg="cleaning up after shim disconnected" id=72b0f0f5fb447960259df651fd6b706c585db8a635b7698b2bd5607cf4c2b152 namespace=k8s.io Sep 9 00:09:53.589506 containerd[1552]: time="2025-09-09T00:09:53.589425668Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:09:53.783934 kubelet[2625]: I0909 00:09:53.783700 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:09:53.784641 kubelet[2625]: E0909 00:09:53.784074 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:53.785095 containerd[1552]: time="2025-09-09T00:09:53.784879885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 00:09:54.690562 kubelet[2625]: E0909 00:09:54.690356 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6ncd" podUID="fed5e32a-e444-4618-8bda-5d30e5270140" Sep 9 00:09:56.321482 containerd[1552]: time="2025-09-09T00:09:56.321430738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:56.322074 containerd[1552]: time="2025-09-09T00:09:56.322037150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 9 00:09:56.322870 containerd[1552]: time="2025-09-09T00:09:56.322839978Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:56.324970 containerd[1552]: time="2025-09-09T00:09:56.324937197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:09:56.326402 containerd[1552]: time="2025-09-09T00:09:56.326155981Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.541231012s" Sep 9 00:09:56.326402 containerd[1552]: time="2025-09-09T00:09:56.326191023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 9 00:09:56.328053 containerd[1552]: time="2025-09-09T00:09:56.328009618Z" level=info msg="CreateContainer within sandbox \"5723abb537229b7c2f01a55edc7bf410c09519091297b0462a03bd3d806e0134\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:09:56.343218 containerd[1552]: time="2025-09-09T00:09:56.343070981Z" level=info msg="CreateContainer within sandbox \"5723abb537229b7c2f01a55edc7bf410c09519091297b0462a03bd3d806e0134\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b86e048216318a20f671fcf5d33cf5d5697dc43575217aac3d5a1270a4a3e44a\"" Sep 9 00:09:56.344936 containerd[1552]: time="2025-09-09T00:09:56.344884135Z" level=info msg="StartContainer for \"b86e048216318a20f671fcf5d33cf5d5697dc43575217aac3d5a1270a4a3e44a\"" Sep 9 00:09:56.395835 containerd[1552]: time="2025-09-09T00:09:56.395789511Z" level=info msg="StartContainer for \"b86e048216318a20f671fcf5d33cf5d5697dc43575217aac3d5a1270a4a3e44a\" returns successfully" Sep 9 00:09:56.688724 kubelet[2625]: E0909 00:09:56.688684 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6ncd" podUID="fed5e32a-e444-4618-8bda-5d30e5270140" Sep 9 00:09:56.984458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b86e048216318a20f671fcf5d33cf5d5697dc43575217aac3d5a1270a4a3e44a-rootfs.mount: Deactivated successfully. Sep 9 00:09:56.987788 containerd[1552]: time="2025-09-09T00:09:56.987735522Z" level=info msg="shim disconnected" id=b86e048216318a20f671fcf5d33cf5d5697dc43575217aac3d5a1270a4a3e44a namespace=k8s.io Sep 9 00:09:56.987788 containerd[1552]: time="2025-09-09T00:09:56.987787006Z" level=warning msg="cleaning up after shim disconnected" id=b86e048216318a20f671fcf5d33cf5d5697dc43575217aac3d5a1270a4a3e44a namespace=k8s.io Sep 9 00:09:56.987970 containerd[1552]: time="2025-09-09T00:09:56.987794927Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:09:56.994006 kubelet[2625]: I0909 00:09:56.993980 2625 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 00:09:57.164165 kubelet[2625]: I0909 00:09:57.164088 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bad4beb6-5789-44ba-9fa9-719ce9756876-goldmane-ca-bundle\") pod \"goldmane-7988f88666-bdppx\" (UID: \"bad4beb6-5789-44ba-9fa9-719ce9756876\") " pod="calico-system/goldmane-7988f88666-bdppx" Sep 9 00:09:57.164165 kubelet[2625]: I0909 00:09:57.164159 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9grrj\" (UniqueName: \"kubernetes.io/projected/bad4beb6-5789-44ba-9fa9-719ce9756876-kube-api-access-9grrj\") pod \"goldmane-7988f88666-bdppx\" (UID: \"bad4beb6-5789-44ba-9fa9-719ce9756876\") " pod="calico-system/goldmane-7988f88666-bdppx" Sep 9 00:09:57.164327 kubelet[2625]: I0909 00:09:57.164182 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed0dfce1-e4dc-4958-825b-d1a4c64907b2-config-volume\") pod \"coredns-7c65d6cfc9-gjbqz\" (UID: \"ed0dfce1-e4dc-4958-825b-d1a4c64907b2\") " pod="kube-system/coredns-7c65d6cfc9-gjbqz" Sep 9 00:09:57.164327 kubelet[2625]: I0909 00:09:57.164202 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjk8r\" (UniqueName: \"kubernetes.io/projected/ed0dfce1-e4dc-4958-825b-d1a4c64907b2-kube-api-access-vjk8r\") pod \"coredns-7c65d6cfc9-gjbqz\" (UID: \"ed0dfce1-e4dc-4958-825b-d1a4c64907b2\") " pod="kube-system/coredns-7c65d6cfc9-gjbqz" Sep 9 00:09:57.164327 kubelet[2625]: I0909 00:09:57.164219 2625 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64vvp\" (UniqueName: \"kubernetes.io/projected/9bf6427e-824b-4ca1-8c25-8463e096ff46-kube-api-access-64vvp\") pod \"calico-apiserver-d475789b5-j7mc9\" (UID: \"9bf6427e-824b-4ca1-8c25-8463e096ff46\") " pod="calico-apiserver/calico-apiserver-d475789b5-j7mc9" Sep 9 00:09:57.164327 kubelet[2625]: I0909 00:09:57.164237 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb4pk\" (UniqueName: \"kubernetes.io/projected/bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b-kube-api-access-qb4pk\") pod \"calico-apiserver-d475789b5-phkbj\" (UID: \"bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b\") " pod="calico-apiserver/calico-apiserver-d475789b5-phkbj" Sep 9 00:09:57.164327 kubelet[2625]: I0909 00:09:57.164277 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9bf6427e-824b-4ca1-8c25-8463e096ff46-calico-apiserver-certs\") pod \"calico-apiserver-d475789b5-j7mc9\" (UID: \"9bf6427e-824b-4ca1-8c25-8463e096ff46\") " pod="calico-apiserver/calico-apiserver-d475789b5-j7mc9" Sep 9 00:09:57.164444 kubelet[2625]: I0909 00:09:57.164323 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bad4beb6-5789-44ba-9fa9-719ce9756876-config\") pod \"goldmane-7988f88666-bdppx\" (UID: \"bad4beb6-5789-44ba-9fa9-719ce9756876\") " pod="calico-system/goldmane-7988f88666-bdppx" Sep 9 00:09:57.164444 kubelet[2625]: I0909 00:09:57.164356 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21616835-9c34-40e5-986f-0d6c83c914be-whisker-ca-bundle\") pod \"whisker-75d7d5c5c4-2kg6j\" (UID: \"21616835-9c34-40e5-986f-0d6c83c914be\") " pod="calico-system/whisker-75d7d5c5c4-2kg6j" Sep 9 00:09:57.164444 kubelet[2625]: I0909 00:09:57.164401 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a80d3a72-2041-413d-bbb6-13849b69be7e-tigera-ca-bundle\") pod \"calico-kube-controllers-6d49f97799-ghr5z\" (UID: \"a80d3a72-2041-413d-bbb6-13849b69be7e\") " pod="calico-system/calico-kube-controllers-6d49f97799-ghr5z" Sep 9 00:09:57.164511 kubelet[2625]: I0909 00:09:57.164441 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/571f56a2-cb8d-4d68-9002-2ce4d310ff3f-config-volume\") pod \"coredns-7c65d6cfc9-8wxhk\" (UID: \"571f56a2-cb8d-4d68-9002-2ce4d310ff3f\") " pod="kube-system/coredns-7c65d6cfc9-8wxhk" Sep 9 00:09:57.164511 kubelet[2625]: I0909 00:09:57.164474 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr5wr\" (UniqueName: \"kubernetes.io/projected/571f56a2-cb8d-4d68-9002-2ce4d310ff3f-kube-api-access-mr5wr\") pod \"coredns-7c65d6cfc9-8wxhk\" (UID: \"571f56a2-cb8d-4d68-9002-2ce4d310ff3f\") " pod="kube-system/coredns-7c65d6cfc9-8wxhk" Sep 9 00:09:57.164511 kubelet[2625]: I0909 00:09:57.164501 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21616835-9c34-40e5-986f-0d6c83c914be-whisker-backend-key-pair\") pod 
\"whisker-75d7d5c5c4-2kg6j\" (UID: \"21616835-9c34-40e5-986f-0d6c83c914be\") " pod="calico-system/whisker-75d7d5c5c4-2kg6j" Sep 9 00:09:57.164574 kubelet[2625]: I0909 00:09:57.164519 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h59w4\" (UniqueName: \"kubernetes.io/projected/21616835-9c34-40e5-986f-0d6c83c914be-kube-api-access-h59w4\") pod \"whisker-75d7d5c5c4-2kg6j\" (UID: \"21616835-9c34-40e5-986f-0d6c83c914be\") " pod="calico-system/whisker-75d7d5c5c4-2kg6j" Sep 9 00:09:57.164574 kubelet[2625]: I0909 00:09:57.164535 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk94z\" (UniqueName: \"kubernetes.io/projected/a80d3a72-2041-413d-bbb6-13849b69be7e-kube-api-access-rk94z\") pod \"calico-kube-controllers-6d49f97799-ghr5z\" (UID: \"a80d3a72-2041-413d-bbb6-13849b69be7e\") " pod="calico-system/calico-kube-controllers-6d49f97799-ghr5z" Sep 9 00:09:57.164574 kubelet[2625]: I0909 00:09:57.164552 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b-calico-apiserver-certs\") pod \"calico-apiserver-d475789b5-phkbj\" (UID: \"bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b\") " pod="calico-apiserver/calico-apiserver-d475789b5-phkbj" Sep 9 00:09:57.164574 kubelet[2625]: I0909 00:09:57.164566 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/bad4beb6-5789-44ba-9fa9-719ce9756876-goldmane-key-pair\") pod \"goldmane-7988f88666-bdppx\" (UID: \"bad4beb6-5789-44ba-9fa9-719ce9756876\") " pod="calico-system/goldmane-7988f88666-bdppx" Sep 9 00:09:57.337511 containerd[1552]: time="2025-09-09T00:09:57.337320980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d475789b5-j7mc9,Uid:9bf6427e-824b-4ca1-8c25-8463e096ff46,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:09:57.344385 containerd[1552]: time="2025-09-09T00:09:57.344308071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d475789b5-phkbj,Uid:bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:09:57.355040 kubelet[2625]: E0909 00:09:57.354814 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:57.355254 containerd[1552]: time="2025-09-09T00:09:57.355223483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8wxhk,Uid:571f56a2-cb8d-4d68-9002-2ce4d310ff3f,Namespace:kube-system,Attempt:0,}" Sep 9 00:09:57.359342 containerd[1552]: time="2025-09-09T00:09:57.359294775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75d7d5c5c4-2kg6j,Uid:21616835-9c34-40e5-986f-0d6c83c914be,Namespace:calico-system,Attempt:0,}" Sep 9 00:09:57.361596 kubelet[2625]: E0909 00:09:57.361420 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:09:57.361914 containerd[1552]: time="2025-09-09T00:09:57.361788099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gjbqz,Uid:ed0dfce1-e4dc-4958-825b-d1a4c64907b2,Namespace:kube-system,Attempt:0,}" Sep 9 00:09:57.362635 containerd[1552]: 
time="2025-09-09T00:09:57.362259418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bdppx,Uid:bad4beb6-5789-44ba-9fa9-719ce9756876,Namespace:calico-system,Attempt:0,}" Sep 9 00:09:57.362635 containerd[1552]: time="2025-09-09T00:09:57.362447793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d49f97799-ghr5z,Uid:a80d3a72-2041-413d-bbb6-13849b69be7e,Namespace:calico-system,Attempt:0,}" Sep 9 00:09:57.508152 containerd[1552]: time="2025-09-09T00:09:57.508002810Z" level=error msg="Failed to destroy network for sandbox \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.511167 containerd[1552]: time="2025-09-09T00:09:57.511072741Z" level=error msg="encountered an error cleaning up failed sandbox \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.511280 containerd[1552]: time="2025-09-09T00:09:57.511176149Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gjbqz,Uid:ed0dfce1-e4dc-4958-825b-d1a4c64907b2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.512890 kubelet[2625]: E0909 00:09:57.512845 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.513151 containerd[1552]: time="2025-09-09T00:09:57.513081025Z" level=error msg="Failed to destroy network for sandbox \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.514753 containerd[1552]: time="2025-09-09T00:09:57.514390572Z" level=error msg="encountered an error cleaning up failed sandbox \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.514875 containerd[1552]: time="2025-09-09T00:09:57.514843129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d49f97799-ghr5z,Uid:a80d3a72-2041-413d-bbb6-13849b69be7e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\": plugin type=\"calico\" failed (add): stat 
Sep 9 00:09:57.515443 kubelet[2625]: E0909 00:09:57.515286 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.515443 kubelet[2625]: E0909 00:09:57.515373 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d49f97799-ghr5z" Sep 9 00:09:57.515796 kubelet[2625]: E0909 00:09:57.515696 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gjbqz" Sep 9 00:09:57.516346 kubelet[2625]: E0909 00:09:57.516319 2625 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gjbqz" Sep 9 00:09:57.516512 kubelet[2625]: E0909 00:09:57.516483 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gjbqz_kube-system(ed0dfce1-e4dc-4958-825b-d1a4c64907b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gjbqz_kube-system(ed0dfce1-e4dc-4958-825b-d1a4c64907b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gjbqz" podUID="ed0dfce1-e4dc-4958-825b-d1a4c64907b2" Sep 9 00:09:57.518152 kubelet[2625]: E0909 00:09:57.518117 2625 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d49f97799-ghr5z" Sep 9 00:09:57.518233 kubelet[2625]: E0909 00:09:57.518183 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-6d49f97799-ghr5z_calico-system(a80d3a72-2041-413d-bbb6-13849b69be7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d49f97799-ghr5z_calico-system(a80d3a72-2041-413d-bbb6-13849b69be7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d49f97799-ghr5z" podUID="a80d3a72-2041-413d-bbb6-13849b69be7e" Sep 9 00:09:57.521259 containerd[1552]: time="2025-09-09T00:09:57.521221490Z" level=error msg="Failed to destroy network for sandbox \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.521559 containerd[1552]: time="2025-09-09T00:09:57.521531795Z" level=error msg="encountered an error cleaning up failed sandbox \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.521600 containerd[1552]: time="2025-09-09T00:09:57.521578999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bdppx,Uid:bad4beb6-5789-44ba-9fa9-719ce9756876,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.521911 kubelet[2625]: E0909 00:09:57.521764 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.521911 kubelet[2625]: E0909 00:09:57.521808 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-bdppx" Sep 9 00:09:57.521911 kubelet[2625]: E0909 00:09:57.521835 2625 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-bdppx" Sep 9 00:09:57.522037 containerd[1552]: 
time="2025-09-09T00:09:57.521768775Z" level=error msg="Failed to destroy network for sandbox \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.522157 kubelet[2625]: E0909 00:09:57.521878 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-bdppx_calico-system(bad4beb6-5789-44ba-9fa9-719ce9756876)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-bdppx_calico-system(bad4beb6-5789-44ba-9fa9-719ce9756876)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-bdppx" podUID="bad4beb6-5789-44ba-9fa9-719ce9756876" Sep 9 00:09:57.522342 containerd[1552]: time="2025-09-09T00:09:57.522299658Z" level=error msg="encountered an error cleaning up failed sandbox \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.522382 containerd[1552]: time="2025-09-09T00:09:57.522349262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8wxhk,Uid:571f56a2-cb8d-4d68-9002-2ce4d310ff3f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.522497 kubelet[2625]: E0909 00:09:57.522472 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.522531 kubelet[2625]: E0909 00:09:57.522509 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8wxhk" Sep 9 00:09:57.522602 kubelet[2625]: E0909 00:09:57.522529 2625 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-8wxhk" Sep 9 00:09:57.522602 kubelet[2625]: E0909 00:09:57.522557 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-8wxhk_kube-system(571f56a2-cb8d-4d68-9002-2ce4d310ff3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-8wxhk_kube-system(571f56a2-cb8d-4d68-9002-2ce4d310ff3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8wxhk" podUID="571f56a2-cb8d-4d68-9002-2ce4d310ff3f" Sep 9 00:09:57.535373 containerd[1552]: time="2025-09-09T00:09:57.535239676Z" level=error msg="Failed to destroy network for sandbox \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.535667 containerd[1552]: time="2025-09-09T00:09:57.535644349Z" level=error msg="Failed to destroy network for sandbox \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.535933 containerd[1552]: time="2025-09-09T00:09:57.535910050Z" level=error msg="encountered an error cleaning up failed sandbox \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.536184 containerd[1552]: time="2025-09-09T00:09:57.536161071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d475789b5-j7mc9,Uid:9bf6427e-824b-4ca1-8c25-8463e096ff46,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.536462 containerd[1552]: time="2025-09-09T00:09:57.536313323Z" level=error msg="encountered an error cleaning up failed sandbox \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.536462 containerd[1552]: time="2025-09-09T00:09:57.536415612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d475789b5-phkbj,Uid:bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.536942 kubelet[2625]: E0909 00:09:57.536908 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.537005 kubelet[2625]: E0909 00:09:57.536963 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d475789b5-phkbj" Sep 9 00:09:57.537005 kubelet[2625]: E0909 00:09:57.536987 2625 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d475789b5-phkbj" Sep 9 00:09:57.537051 kubelet[2625]: E0909 00:09:57.537021 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d475789b5-phkbj_calico-apiserver(bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d475789b5-phkbj_calico-apiserver(bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d475789b5-phkbj" podUID="bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b" Sep 9 00:09:57.537443 kubelet[2625]: E0909 00:09:57.536910 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.537443 kubelet[2625]: E0909 00:09:57.537381 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d475789b5-j7mc9" Sep 9 00:09:57.537443 kubelet[2625]: E0909 00:09:57.537402 2625 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d475789b5-j7mc9" Sep 9 00:09:57.538252 kubelet[2625]: E0909 00:09:57.537459 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d475789b5-j7mc9_calico-apiserver(9bf6427e-824b-4ca1-8c25-8463e096ff46)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d475789b5-j7mc9_calico-apiserver(9bf6427e-824b-4ca1-8c25-8463e096ff46)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d475789b5-j7mc9" podUID="9bf6427e-824b-4ca1-8c25-8463e096ff46" Sep 9 00:09:57.543076 containerd[1552]: time="2025-09-09T00:09:57.543038193Z" level=error msg="Failed to destroy network for sandbox \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.543422 containerd[1552]: time="2025-09-09T00:09:57.543387302Z" level=error msg="encountered an error cleaning up failed sandbox \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.543464 containerd[1552]: time="2025-09-09T00:09:57.543441026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75d7d5c5c4-2kg6j,Uid:21616835-9c34-40e5-986f-0d6c83c914be,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.543648 kubelet[2625]: E0909 00:09:57.543621 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.543683 kubelet[2625]: E0909 00:09:57.543665 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75d7d5c5c4-2kg6j" Sep 9 00:09:57.543715 kubelet[2625]: E0909 00:09:57.543682 2625 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75d7d5c5c4-2kg6j" Sep 9 00:09:57.543747 kubelet[2625]: E0909 00:09:57.543716 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75d7d5c5c4-2kg6j_calico-system(21616835-9c34-40e5-986f-0d6c83c914be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75d7d5c5c4-2kg6j_calico-system(21616835-9c34-40e5-986f-0d6c83c914be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75d7d5c5c4-2kg6j" podUID="21616835-9c34-40e5-986f-0d6c83c914be" Sep 9 00:09:57.793075 kubelet[2625]: I0909 00:09:57.793047 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:09:57.794361 containerd[1552]: time="2025-09-09T00:09:57.793938259Z" level=info msg="StopPodSandbox for \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\"" Sep 9 00:09:57.794361 containerd[1552]: time="2025-09-09T00:09:57.794119314Z" level=info msg="Ensure that sandbox 670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5 in task-service has been cleanup successfully" Sep 9 00:09:57.794596 kubelet[2625]: I0909 00:09:57.794025 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:09:57.795013 containerd[1552]: time="2025-09-09T00:09:57.794893898Z" level=info msg="StopPodSandbox for \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\"" Sep 9 00:09:57.795070 containerd[1552]: time="2025-09-09T00:09:57.795034149Z" level=info msg="Ensure that sandbox 95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71 in task-service has been cleanup successfully" Sep 9 00:09:57.795736 kubelet[2625]: I0909 00:09:57.795700 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:09:57.796302 containerd[1552]: time="2025-09-09T00:09:57.796214485Z" level=info msg="StopPodSandbox for \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\"" Sep 9 00:09:57.797923 kubelet[2625]: I0909 00:09:57.797033 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:09:57.797989 containerd[1552]: time="2025-09-09T00:09:57.797425424Z" level=info msg="StopPodSandbox for \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\"" Sep 9 00:09:57.798217 containerd[1552]: time="2025-09-09T00:09:57.798108400Z" level=info msg="Ensure that sandbox 1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37 in task-service has been cleanup successfully" Sep 9 00:09:57.798478 containerd[1552]: time="2025-09-09T00:09:57.798268533Z" 
level=info msg="Ensure that sandbox 00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985 in task-service has been cleanup successfully" Sep 9 00:09:57.799502 kubelet[2625]: I0909 00:09:57.798986 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:09:57.800319 containerd[1552]: time="2025-09-09T00:09:57.799810539Z" level=info msg="StopPodSandbox for \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\"" Sep 9 00:09:57.800456 kubelet[2625]: I0909 00:09:57.800422 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:09:57.800496 containerd[1552]: time="2025-09-09T00:09:57.800465433Z" level=info msg="Ensure that sandbox f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242 in task-service has been cleanup successfully" Sep 9 00:09:57.801291 containerd[1552]: time="2025-09-09T00:09:57.801258938Z" level=info msg="StopPodSandbox for \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\"" Sep 9 00:09:57.803163 containerd[1552]: time="2025-09-09T00:09:57.802399311Z" level=info msg="Ensure that sandbox cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e in task-service has been cleanup successfully" Sep 9 00:09:57.806082 containerd[1552]: time="2025-09-09T00:09:57.805707021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 00:09:57.808407 kubelet[2625]: I0909 00:09:57.808010 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:09:57.811472 containerd[1552]: time="2025-09-09T00:09:57.811438770Z" level=info msg="StopPodSandbox for \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\"" Sep 9 00:09:57.812041 containerd[1552]: time="2025-09-09T00:09:57.812018217Z" level=info msg="Ensure that sandbox ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6 in task-service has been cleanup successfully" Sep 9 00:09:57.845638 containerd[1552]: time="2025-09-09T00:09:57.845580480Z" level=error msg="StopPodSandbox for \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\" failed" error="failed to destroy network for sandbox \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.846085 kubelet[2625]: E0909 00:09:57.845918 2625 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:09:57.846085 kubelet[2625]: E0909 00:09:57.845974 2625 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71"} Sep 9 00:09:57.846085 kubelet[2625]: E0909 00:09:57.846035 2625 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"9bf6427e-824b-4ca1-8c25-8463e096ff46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:09:57.846085 kubelet[2625]: E0909 00:09:57.846059 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9bf6427e-824b-4ca1-8c25-8463e096ff46\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d475789b5-j7mc9" podUID="9bf6427e-824b-4ca1-8c25-8463e096ff46" Sep 9 00:09:57.856018 containerd[1552]: time="2025-09-09T00:09:57.855939767Z" level=error msg="StopPodSandbox for \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\" failed" error="failed to destroy network for sandbox \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.856397 kubelet[2625]: E0909 00:09:57.856211 2625 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:09:57.856397 kubelet[2625]: E0909 00:09:57.856285 2625 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5"} Sep 9 00:09:57.856397 kubelet[2625]: E0909 00:09:57.856315 2625 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:09:57.856397 kubelet[2625]: E0909 00:09:57.856343 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d475789b5-phkbj" podUID="bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b" Sep 9 00:09:57.857163 
containerd[1552]: time="2025-09-09T00:09:57.857123304Z" level=error msg="StopPodSandbox for \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\" failed" error="failed to destroy network for sandbox \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.857270 containerd[1552]: time="2025-09-09T00:09:57.857242073Z" level=error msg="StopPodSandbox for \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\" failed" error="failed to destroy network for sandbox \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.857397 kubelet[2625]: E0909 00:09:57.857371 2625 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:09:57.857440 kubelet[2625]: E0909 00:09:57.857404 2625 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6"} Sep 9 00:09:57.857462 kubelet[2625]: E0909 00:09:57.857432 2625 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bad4beb6-5789-44ba-9fa9-719ce9756876\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:09:57.857506 kubelet[2625]: E0909 00:09:57.857457 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bad4beb6-5789-44ba-9fa9-719ce9756876\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-bdppx" podUID="bad4beb6-5789-44ba-9fa9-719ce9756876" Sep 9 00:09:57.857733 kubelet[2625]: E0909 00:09:57.857583 2625 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:09:57.857733 kubelet[2625]: E0909 00:09:57.857615 2625 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37"} Sep 9 00:09:57.857733 kubelet[2625]: E0909 00:09:57.857662 2625 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21616835-9c34-40e5-986f-0d6c83c914be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:09:57.857733 kubelet[2625]: E0909 00:09:57.857681 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21616835-9c34-40e5-986f-0d6c83c914be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75d7d5c5c4-2kg6j" podUID="21616835-9c34-40e5-986f-0d6c83c914be" Sep 9 00:09:57.857987 containerd[1552]: time="2025-09-09T00:09:57.857904287Z" level=error msg="StopPodSandbox for \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\" failed" error="failed to destroy network for sandbox \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.858092 kubelet[2625]: E0909 00:09:57.858052 2625 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:09:57.858151 kubelet[2625]: E0909 00:09:57.858111 2625 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e"} Sep 9 00:09:57.858180 kubelet[2625]: E0909 00:09:57.858150 2625 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ed0dfce1-e4dc-4958-825b-d1a4c64907b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:09:57.858227 kubelet[2625]: E0909 00:09:57.858172 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ed0dfce1-e4dc-4958-825b-d1a4c64907b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gjbqz" podUID="ed0dfce1-e4dc-4958-825b-d1a4c64907b2" Sep 9 00:09:57.859380 containerd[1552]: time="2025-09-09T00:09:57.859345765Z" level=error msg="StopPodSandbox for \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\" failed" error="failed to destroy network for sandbox \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.859538 kubelet[2625]: E0909 00:09:57.859490 2625 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:09:57.859538 kubelet[2625]: E0909 00:09:57.859524 2625 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242"} Sep 9 00:09:57.859620 kubelet[2625]: E0909 00:09:57.859552 2625 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a80d3a72-2041-413d-bbb6-13849b69be7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:09:57.859620 kubelet[2625]: E0909 00:09:57.859570 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a80d3a72-2041-413d-bbb6-13849b69be7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d49f97799-ghr5z" podUID="a80d3a72-2041-413d-bbb6-13849b69be7e" Sep 9 00:09:57.865030 containerd[1552]: time="2025-09-09T00:09:57.864992027Z" level=error msg="StopPodSandbox for \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\" failed" error="failed to destroy network for sandbox \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:57.865225 kubelet[2625]: E0909 00:09:57.865196 2625 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" podSandboxID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:09:57.865385 kubelet[2625]: E0909 00:09:57.865299 2625 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985"} Sep 9 00:09:57.865385 kubelet[2625]: E0909 00:09:57.865329 2625 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"571f56a2-cb8d-4d68-9002-2ce4d310ff3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:09:57.865385 kubelet[2625]: E0909 00:09:57.865347 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"571f56a2-cb8d-4d68-9002-2ce4d310ff3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8wxhk" podUID="571f56a2-cb8d-4d68-9002-2ce4d310ff3f" Sep 9 00:09:58.337815 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985-shm.mount: Deactivated successfully. Sep 9 00:09:58.337959 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5-shm.mount: Deactivated successfully. Sep 9 00:09:58.338047 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71-shm.mount: Deactivated successfully. 
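Annotation — every failure in the burst above has the same root cause: the Calico CNI plugin refuses to run ADD or DEL until calico/node has written the node's name to /var/lib/calico/nodename (a hostPath shared between the calico/node container and the CNI binary), and at this point the calico/node image is still being pulled. Each CNI failure propagates as an RPC error from containerd up through kubelet (log.go → kuberuntime_sandbox.go → pod_workers.go), which skips the pod and retries with backoff. A minimal Go sketch of that gating check, assuming only the file path shown in the log — not the actual Calico source:

```go
// Hypothetical sketch (not the real Calico source) of the check behind
// the repeated "stat /var/lib/calico/nodename" errors above.
package main

import (
	"fmt"
	"os"
	"strings"
)

// Path taken from the log: calico/node writes this file at startup
// via a hostPath mount shared with the CNI plugin.
const nodenameFile = "/var/lib/calico/nodename"

// nodename fails until calico/node has started and populated the file;
// every CNI ADD/DEL in the window above dies on this check.
func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// err already reads like "open /var/lib/calico/nodename: no such
		// file or directory"; append the same hint the log shows.
		return "", fmt.Errorf("%v: check that the calico/node container "+
			"is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Println("CNI ADD/DEL would fail:", err)
		return
	}
	fmt.Println("node:", name)
}
```

Failing fast and loudly here is what keeps kubelet retrying until calico/node is healthy, rather than wiring pods into a half-configured dataplane — which is exactly the recovery visible at 00:10:02 below.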
Sep 9 00:09:58.696530 containerd[1552]: time="2025-09-09T00:09:58.696486030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6ncd,Uid:fed5e32a-e444-4618-8bda-5d30e5270140,Namespace:calico-system,Attempt:0,}" Sep 9 00:09:58.759306 containerd[1552]: time="2025-09-09T00:09:58.759261998Z" level=error msg="Failed to destroy network for sandbox \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:58.760183 containerd[1552]: time="2025-09-09T00:09:58.760147668Z" level=error msg="encountered an error cleaning up failed sandbox \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:58.760241 containerd[1552]: time="2025-09-09T00:09:58.760197552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6ncd,Uid:fed5e32a-e444-4618-8bda-5d30e5270140,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:58.760750 kubelet[2625]: E0909 00:09:58.760399 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:58.760750 kubelet[2625]: E0909 00:09:58.760460 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s6ncd" Sep 9 00:09:58.760750 kubelet[2625]: E0909 00:09:58.760482 2625 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s6ncd" Sep 9 00:09:58.760875 kubelet[2625]: E0909 00:09:58.760526 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s6ncd_calico-system(fed5e32a-e444-4618-8bda-5d30e5270140)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s6ncd_calico-system(fed5e32a-e444-4618-8bda-5d30e5270140)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6ncd" podUID="fed5e32a-e444-4618-8bda-5d30e5270140" Sep 9 00:09:58.764884 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6-shm.mount: Deactivated successfully. Sep 9 00:09:58.810124 kubelet[2625]: I0909 00:09:58.809736 2625 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:09:58.811637 containerd[1552]: time="2025-09-09T00:09:58.810295645Z" level=info msg="StopPodSandbox for \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\"" Sep 9 00:09:58.811637 containerd[1552]: time="2025-09-09T00:09:58.810674955Z" level=info msg="Ensure that sandbox a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6 in task-service has been cleanup successfully" Sep 9 00:09:58.840662 containerd[1552]: time="2025-09-09T00:09:58.840613186Z" level=error msg="StopPodSandbox for \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\" failed" error="failed to destroy network for sandbox \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:09:58.840920 kubelet[2625]: E0909 00:09:58.840814 2625 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:09:58.840993 kubelet[2625]: E0909 00:09:58.840933 2625 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6"} Sep 9 00:09:58.840993 kubelet[2625]: E0909 00:09:58.840977 2625 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fed5e32a-e444-4618-8bda-5d30e5270140\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:09:58.841064 kubelet[2625]: E0909 00:09:58.840997 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fed5e32a-e444-4618-8bda-5d30e5270140\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6ncd" podUID="fed5e32a-e444-4618-8bda-5d30e5270140" Sep 9 00:10:02.031999 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2741786114.mount: Deactivated successfully. Sep 9 00:10:02.111785 kubelet[2625]: I0909 00:10:02.111744 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:10:02.112812 kubelet[2625]: E0909 00:10:02.112767 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:02.275940 containerd[1552]: time="2025-09-09T00:10:02.275595002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:02.276456 containerd[1552]: time="2025-09-09T00:10:02.276411977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 9 00:10:02.279464 containerd[1552]: time="2025-09-09T00:10:02.279412220Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:02.282750 containerd[1552]: time="2025-09-09T00:10:02.281934150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:02.282750 containerd[1552]: time="2025-09-09T00:10:02.282428463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.476682679s" Sep 9 00:10:02.282750 containerd[1552]: time="2025-09-09T00:10:02.282469106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 9 00:10:02.297600 containerd[1552]: time="2025-09-09T00:10:02.297529403Z" level=info msg="CreateContainer within sandbox \"5723abb537229b7c2f01a55edc7bf410c09519091297b0462a03bd3d806e0134\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:10:02.312848 containerd[1552]: time="2025-09-09T00:10:02.312728070Z" level=info msg="CreateContainer within sandbox \"5723abb537229b7c2f01a55edc7bf410c09519091297b0462a03bd3d806e0134\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c2084af97f9e6dc60f08a198287e75f79fcc7bb706c17ed1bca0d107103022e8\"" Sep 9 00:10:02.313415 containerd[1552]: time="2025-09-09T00:10:02.313383554Z" level=info msg="StartContainer for \"c2084af97f9e6dc60f08a198287e75f79fcc7bb706c17ed1bca0d107103022e8\"" Sep 9 00:10:02.400732 containerd[1552]: time="2025-09-09T00:10:02.400678810Z" level=info msg="StartContainer for \"c2084af97f9e6dc60f08a198287e75f79fcc7bb706c17ed1bca0d107103022e8\" returns successfully" Sep 9 00:10:02.522123 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:10:02.522219 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
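Annotation — the pull that unblocks everything finishes here: the PullImage request for ghcr.io/flatcar/calico/node:v3.30.3 was issued at 00:09:57.805707021Z in the middle of the failure burst, and containerd reports the image pulled "in 4.476682679s" at 00:10:02.282. The gap between the two logged timestamps agrees with containerd's internally measured figure to within about 40µs, as a throwaway Go check confirms (both timestamps copied verbatim from the log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// PullImage issued vs. "Pulled image ... returns" — both from the log above.
	issued, _ := time.Parse(time.RFC3339Nano, "2025-09-09T00:09:57.805707021Z")
	pulled, _ := time.Parse(time.RFC3339Nano, "2025-09-09T00:10:02.282428463Z")
	fmt.Println(pulled.Sub(issued)) // 4.476721442s vs. containerd's own 4.476682679s
}
```

Immediately afterwards the calico-node container starts, and the kernel loads the WireGuard module — presumably because calico-node probes for WireGuard encryption support on startup.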
Sep 9 00:10:02.827051 kubelet[2625]: E0909 00:10:02.826821 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:02.836369 containerd[1552]: time="2025-09-09T00:10:02.836334634Z" level=info msg="StopPodSandbox for \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\"" Sep 9 00:10:02.862096 kubelet[2625]: I0909 00:10:02.861782 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-47d4b" podStartSLOduration=1.492047828 podStartE2EDuration="12.861765312s" podCreationTimestamp="2025-09-09 00:09:50 +0000 UTC" firstStartedPulling="2025-09-09 00:09:50.91360104 +0000 UTC m=+20.313032702" lastFinishedPulling="2025-09-09 00:10:02.283318524 +0000 UTC m=+31.682750186" observedRunningTime="2025-09-09 00:10:02.861329842 +0000 UTC m=+32.260761464" watchObservedRunningTime="2025-09-09 00:10:02.861765312 +0000 UTC m=+32.261196974" Sep 9 00:10:03.019445 containerd[1552]: 2025-09-09 00:10:02.920 [INFO][3943] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:10:03.019445 containerd[1552]: 2025-09-09 00:10:02.921 [INFO][3943] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" iface="eth0" netns="/var/run/netns/cni-f32f0569-532e-67b0-7329-eb4eb92106df" Sep 9 00:10:03.019445 containerd[1552]: 2025-09-09 00:10:02.923 [INFO][3943] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" iface="eth0" netns="/var/run/netns/cni-f32f0569-532e-67b0-7329-eb4eb92106df" Sep 9 00:10:03.019445 containerd[1552]: 2025-09-09 00:10:02.926 [INFO][3943] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" iface="eth0" netns="/var/run/netns/cni-f32f0569-532e-67b0-7329-eb4eb92106df" Sep 9 00:10:03.019445 containerd[1552]: 2025-09-09 00:10:02.926 [INFO][3943] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:10:03.019445 containerd[1552]: 2025-09-09 00:10:02.926 [INFO][3943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:10:03.019445 containerd[1552]: 2025-09-09 00:10:03.005 [INFO][3953] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" HandleID="k8s-pod-network.1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Workload="localhost-k8s-whisker--75d7d5c5c4--2kg6j-eth0" Sep 9 00:10:03.019445 containerd[1552]: 2025-09-09 00:10:03.005 [INFO][3953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:03.019445 containerd[1552]: 2025-09-09 00:10:03.005 [INFO][3953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:03.019445 containerd[1552]: 2025-09-09 00:10:03.014 [WARNING][3953] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" HandleID="k8s-pod-network.1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Workload="localhost-k8s-whisker--75d7d5c5c4--2kg6j-eth0" Sep 9 00:10:03.019445 containerd[1552]: 2025-09-09 00:10:03.014 [INFO][3953] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" HandleID="k8s-pod-network.1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Workload="localhost-k8s-whisker--75d7d5c5c4--2kg6j-eth0" Sep 9 00:10:03.019445 containerd[1552]: 2025-09-09 00:10:03.015 [INFO][3953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:03.019445 containerd[1552]: 2025-09-09 00:10:03.017 [INFO][3943] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:10:03.019816 containerd[1552]: time="2025-09-09T00:10:03.019607807Z" level=info msg="TearDown network for sandbox \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\" successfully" Sep 9 00:10:03.019816 containerd[1552]: time="2025-09-09T00:10:03.019673011Z" level=info msg="StopPodSandbox for \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\" returns successfully" Sep 9 00:10:03.033062 systemd[1]: run-netns-cni\x2df32f0569\x2d532e\x2d67b0\x2d7329\x2deb4eb92106df.mount: Deactivated successfully. Sep 9 00:10:03.105620 kubelet[2625]: I0909 00:10:03.105417 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h59w4\" (UniqueName: \"kubernetes.io/projected/21616835-9c34-40e5-986f-0d6c83c914be-kube-api-access-h59w4\") pod \"21616835-9c34-40e5-986f-0d6c83c914be\" (UID: \"21616835-9c34-40e5-986f-0d6c83c914be\") " Sep 9 00:10:03.105620 kubelet[2625]: I0909 00:10:03.105475 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21616835-9c34-40e5-986f-0d6c83c914be-whisker-ca-bundle\") pod \"21616835-9c34-40e5-986f-0d6c83c914be\" (UID: \"21616835-9c34-40e5-986f-0d6c83c914be\") " Sep 9 00:10:03.105620 kubelet[2625]: I0909 00:10:03.105502 2625 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21616835-9c34-40e5-986f-0d6c83c914be-whisker-backend-key-pair\") pod \"21616835-9c34-40e5-986f-0d6c83c914be\" (UID: \"21616835-9c34-40e5-986f-0d6c83c914be\") " Sep 9 00:10:03.112636 kubelet[2625]: I0909 00:10:03.112551 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21616835-9c34-40e5-986f-0d6c83c914be-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "21616835-9c34-40e5-986f-0d6c83c914be" (UID: "21616835-9c34-40e5-986f-0d6c83c914be"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 00:10:03.116085 kubelet[2625]: I0909 00:10:03.116042 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21616835-9c34-40e5-986f-0d6c83c914be-kube-api-access-h59w4" (OuterVolumeSpecName: "kube-api-access-h59w4") pod "21616835-9c34-40e5-986f-0d6c83c914be" (UID: "21616835-9c34-40e5-986f-0d6c83c914be"). InnerVolumeSpecName "kube-api-access-h59w4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 00:10:03.118059 systemd[1]: var-lib-kubelet-pods-21616835\x2d9c34\x2d40e5\x2d986f\x2d0d6c83c914be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh59w4.mount: Deactivated successfully. Sep 9 00:10:03.121951 kubelet[2625]: I0909 00:10:03.121909 2625 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21616835-9c34-40e5-986f-0d6c83c914be-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "21616835-9c34-40e5-986f-0d6c83c914be" (UID: "21616835-9c34-40e5-986f-0d6c83c914be"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 00:10:03.123932 systemd[1]: var-lib-kubelet-pods-21616835\x2d9c34\x2d40e5\x2d986f\x2d0d6c83c914be-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 00:10:03.205777 kubelet[2625]: I0909 00:10:03.205730 2625 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/21616835-9c34-40e5-986f-0d6c83c914be-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 00:10:03.205777 kubelet[2625]: I0909 00:10:03.205768 2625 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h59w4\" (UniqueName: \"kubernetes.io/projected/21616835-9c34-40e5-986f-0d6c83c914be-kube-api-access-h59w4\") on node \"localhost\" DevicePath \"\"" Sep 9 00:10:03.205777 kubelet[2625]: I0909 00:10:03.205779 2625 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21616835-9c34-40e5-986f-0d6c83c914be-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 00:10:03.828611 kubelet[2625]: I0909 00:10:03.828579 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:10:03.911062 kubelet[2625]: I0909 00:10:03.910687 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07ed0d57-679d-4ce9-842e-46e865580a9c-whisker-ca-bundle\") pod \"whisker-55c6f8b75c-sf2sb\" (UID: \"07ed0d57-679d-4ce9-842e-46e865580a9c\") " pod="calico-system/whisker-55c6f8b75c-sf2sb" Sep 9 00:10:03.911062 kubelet[2625]: I0909 00:10:03.910771 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xntrv\" (UniqueName: \"kubernetes.io/projected/07ed0d57-679d-4ce9-842e-46e865580a9c-kube-api-access-xntrv\") pod \"whisker-55c6f8b75c-sf2sb\" (UID: \"07ed0d57-679d-4ce9-842e-46e865580a9c\") " pod="calico-system/whisker-55c6f8b75c-sf2sb" Sep 9 00:10:03.911062 kubelet[2625]: I0909 00:10:03.910790 2625 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/07ed0d57-679d-4ce9-842e-46e865580a9c-whisker-backend-key-pair\") pod \"whisker-55c6f8b75c-sf2sb\" (UID: \"07ed0d57-679d-4ce9-842e-46e865580a9c\") " pod="calico-system/whisker-55c6f8b75c-sf2sb" Sep 9 00:10:04.179286 containerd[1552]: time="2025-09-09T00:10:04.179236318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55c6f8b75c-sf2sb,Uid:07ed0d57-679d-4ce9-842e-46e865580a9c,Namespace:calico-system,Attempt:0,}" Sep 9 00:10:04.188175 kernel: bpftool[4102]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 9 00:10:04.320631 systemd-networkd[1232]: 
cali2685755be4a: Link UP Sep 9 00:10:04.321392 systemd-networkd[1232]: cali2685755be4a: Gained carrier Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.252 [INFO][4103] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--55c6f8b75c--sf2sb-eth0 whisker-55c6f8b75c- calico-system 07ed0d57-679d-4ce9-842e-46e865580a9c 896 0 2025-09-09 00:10:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55c6f8b75c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-55c6f8b75c-sf2sb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2685755be4a [] [] }} ContainerID="83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" Namespace="calico-system" Pod="whisker-55c6f8b75c-sf2sb" WorkloadEndpoint="localhost-k8s-whisker--55c6f8b75c--sf2sb-" Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.252 [INFO][4103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" Namespace="calico-system" Pod="whisker-55c6f8b75c-sf2sb" WorkloadEndpoint="localhost-k8s-whisker--55c6f8b75c--sf2sb-eth0" Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.274 [INFO][4119] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" HandleID="k8s-pod-network.83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" Workload="localhost-k8s-whisker--55c6f8b75c--sf2sb-eth0" Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.275 [INFO][4119] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" HandleID="k8s-pod-network.83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" Workload="localhost-k8s-whisker--55c6f8b75c--sf2sb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323490), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-55c6f8b75c-sf2sb", "timestamp":"2025-09-09 00:10:04.274906268 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.275 [INFO][4119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.275 [INFO][4119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.275 [INFO][4119] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.285 [INFO][4119] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" host="localhost" Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.292 [INFO][4119] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.296 [INFO][4119] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.297 [INFO][4119] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.299 [INFO][4119] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.299 [INFO][4119] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" host="localhost" Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.301 [INFO][4119] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44 Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.304 [INFO][4119] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" host="localhost" Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.310 [INFO][4119] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" host="localhost" Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.310 [INFO][4119] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" host="localhost" Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.310 [INFO][4119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:10:04.335337 containerd[1552]: 2025-09-09 00:10:04.311 [INFO][4119] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" HandleID="k8s-pod-network.83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" Workload="localhost-k8s-whisker--55c6f8b75c--sf2sb-eth0" Sep 9 00:10:04.335879 containerd[1552]: 2025-09-09 00:10:04.313 [INFO][4103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" Namespace="calico-system" Pod="whisker-55c6f8b75c-sf2sb" WorkloadEndpoint="localhost-k8s-whisker--55c6f8b75c--sf2sb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55c6f8b75c--sf2sb-eth0", GenerateName:"whisker-55c6f8b75c-", Namespace:"calico-system", SelfLink:"", UID:"07ed0d57-679d-4ce9-842e-46e865580a9c", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55c6f8b75c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-55c6f8b75c-sf2sb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2685755be4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:04.335879 containerd[1552]: 2025-09-09 00:10:04.313 [INFO][4103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" Namespace="calico-system" Pod="whisker-55c6f8b75c-sf2sb" WorkloadEndpoint="localhost-k8s-whisker--55c6f8b75c--sf2sb-eth0" Sep 9 00:10:04.335879 containerd[1552]: 2025-09-09 00:10:04.313 [INFO][4103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2685755be4a ContainerID="83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" Namespace="calico-system" Pod="whisker-55c6f8b75c-sf2sb" WorkloadEndpoint="localhost-k8s-whisker--55c6f8b75c--sf2sb-eth0" Sep 9 00:10:04.335879 containerd[1552]: 2025-09-09 00:10:04.321 [INFO][4103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" Namespace="calico-system" Pod="whisker-55c6f8b75c-sf2sb" WorkloadEndpoint="localhost-k8s-whisker--55c6f8b75c--sf2sb-eth0" Sep 9 00:10:04.335879 containerd[1552]: 2025-09-09 00:10:04.321 [INFO][4103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" Namespace="calico-system" Pod="whisker-55c6f8b75c-sf2sb" WorkloadEndpoint="localhost-k8s-whisker--55c6f8b75c--sf2sb-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55c6f8b75c--sf2sb-eth0", GenerateName:"whisker-55c6f8b75c-", Namespace:"calico-system", SelfLink:"", UID:"07ed0d57-679d-4ce9-842e-46e865580a9c", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 10, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55c6f8b75c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44", Pod:"whisker-55c6f8b75c-sf2sb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2685755be4a", MAC:"56:e5:3c:cf:4b:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:04.335879 containerd[1552]: 2025-09-09 00:10:04.330 [INFO][4103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44" Namespace="calico-system" Pod="whisker-55c6f8b75c-sf2sb" WorkloadEndpoint="localhost-k8s-whisker--55c6f8b75c--sf2sb-eth0" Sep 9 00:10:04.357360 containerd[1552]: time="2025-09-09T00:10:04.356325600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:10:04.357360 containerd[1552]: time="2025-09-09T00:10:04.356397084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:10:04.357360 containerd[1552]: time="2025-09-09T00:10:04.356411925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:04.357360 containerd[1552]: time="2025-09-09T00:10:04.356515652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:04.381867 systemd-networkd[1232]: vxlan.calico: Link UP Sep 9 00:10:04.381873 systemd-networkd[1232]: vxlan.calico: Gained carrier Sep 9 00:10:04.383837 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:10:04.408041 containerd[1552]: time="2025-09-09T00:10:04.408004377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55c6f8b75c-sf2sb,Uid:07ed0d57-679d-4ce9-842e-46e865580a9c,Namespace:calico-system,Attempt:0,} returns sandbox id \"83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44\"" Sep 9 00:10:04.414499 containerd[1552]: time="2025-09-09T00:10:04.414319255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:10:04.691636 kubelet[2625]: I0909 00:10:04.691601 2625 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21616835-9c34-40e5-986f-0d6c83c914be" path="/var/lib/kubelet/pods/21616835-9c34-40e5-986f-0d6c83c914be/volumes" Sep 9 00:10:05.855593 containerd[1552]: time="2025-09-09T00:10:05.855552503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:05.856308 containerd[1552]: time="2025-09-09T00:10:05.856284708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 9 00:10:05.857026 containerd[1552]: time="2025-09-09T00:10:05.856995991Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:05.859202 containerd[1552]: time="2025-09-09T00:10:05.859154483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:05.860142 containerd[1552]: time="2025-09-09T00:10:05.860018176Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.445659918s" Sep 9 00:10:05.860142 containerd[1552]: time="2025-09-09T00:10:05.860048417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 9 00:10:05.863362 containerd[1552]: time="2025-09-09T00:10:05.863329578Z" level=info msg="CreateContainer within sandbox \"83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:10:05.875387 containerd[1552]: time="2025-09-09T00:10:05.875350671Z" level=info msg="CreateContainer within sandbox \"83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"322d59b04201d6d7832c4a41800addb73106fd4fa11aaed205cadf671995eac6\"" Sep 9 00:10:05.876012 containerd[1552]: time="2025-09-09T00:10:05.875987709Z" level=info msg="StartContainer for \"322d59b04201d6d7832c4a41800addb73106fd4fa11aaed205cadf671995eac6\"" Sep 9 00:10:05.940303 containerd[1552]: 
time="2025-09-09T00:10:05.940257989Z" level=info msg="StartContainer for \"322d59b04201d6d7832c4a41800addb73106fd4fa11aaed205cadf671995eac6\" returns successfully" Sep 9 00:10:05.941842 containerd[1552]: time="2025-09-09T00:10:05.941805643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:10:06.263327 systemd-networkd[1232]: cali2685755be4a: Gained IPv6LL Sep 9 00:10:06.327404 systemd-networkd[1232]: vxlan.calico: Gained IPv6LL Sep 9 00:10:07.443400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount63830599.mount: Deactivated successfully. Sep 9 00:10:07.457391 containerd[1552]: time="2025-09-09T00:10:07.457348325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:07.457808 containerd[1552]: time="2025-09-09T00:10:07.457777910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 9 00:10:07.458652 containerd[1552]: time="2025-09-09T00:10:07.458621598Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:07.461192 containerd[1552]: time="2025-09-09T00:10:07.461161303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:07.462045 containerd[1552]: time="2025-09-09T00:10:07.462013232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.520174828s" Sep 9 00:10:07.462081 containerd[1552]: time="2025-09-09T00:10:07.462045434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 9 00:10:07.464562 containerd[1552]: time="2025-09-09T00:10:07.464487614Z" level=info msg="CreateContainer within sandbox \"83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:10:07.471857 containerd[1552]: time="2025-09-09T00:10:07.471825034Z" level=info msg="CreateContainer within sandbox \"83a0ffe0c46b0a319205754ab1e39c65b0a0e233621723ceeb92f90c8221bd44\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b195318a13586c21304d2174cfe8269aabbba48d66eadcc1d595b3c3f9e5d14d\"" Sep 9 00:10:07.473157 containerd[1552]: time="2025-09-09T00:10:07.472557956Z" level=info msg="StartContainer for \"b195318a13586c21304d2174cfe8269aabbba48d66eadcc1d595b3c3f9e5d14d\"" Sep 9 00:10:07.539120 containerd[1552]: time="2025-09-09T00:10:07.537856895Z" level=info msg="StartContainer for \"b195318a13586c21304d2174cfe8269aabbba48d66eadcc1d595b3c3f9e5d14d\" returns successfully" Sep 9 00:10:07.852704 kubelet[2625]: I0909 00:10:07.851131 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-55c6f8b75c-sf2sb" podStartSLOduration=1.80196353 podStartE2EDuration="4.851095192s" 
podCreationTimestamp="2025-09-09 00:10:03 +0000 UTC" firstStartedPulling="2025-09-09 00:10:04.413857386 +0000 UTC m=+33.813289048" lastFinishedPulling="2025-09-09 00:10:07.462989048 +0000 UTC m=+36.862420710" observedRunningTime="2025-09-09 00:10:07.850042492 +0000 UTC m=+37.249474194" watchObservedRunningTime="2025-09-09 00:10:07.851095192 +0000 UTC m=+37.250526854" Sep 9 00:10:08.332351 kubelet[2625]: I0909 00:10:08.332285 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:10:08.691455 containerd[1552]: time="2025-09-09T00:10:08.691409746Z" level=info msg="StopPodSandbox for \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\"" Sep 9 00:10:08.824142 containerd[1552]: 2025-09-09 00:10:08.763 [INFO][4400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:10:08.824142 containerd[1552]: 2025-09-09 00:10:08.763 [INFO][4400] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" iface="eth0" netns="/var/run/netns/cni-615c44d7-1565-18e8-3f5f-70103559ca16" Sep 9 00:10:08.824142 containerd[1552]: 2025-09-09 00:10:08.763 [INFO][4400] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" iface="eth0" netns="/var/run/netns/cni-615c44d7-1565-18e8-3f5f-70103559ca16" Sep 9 00:10:08.824142 containerd[1552]: 2025-09-09 00:10:08.773 [INFO][4400] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" iface="eth0" netns="/var/run/netns/cni-615c44d7-1565-18e8-3f5f-70103559ca16" Sep 9 00:10:08.824142 containerd[1552]: 2025-09-09 00:10:08.773 [INFO][4400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:10:08.824142 containerd[1552]: 2025-09-09 00:10:08.773 [INFO][4400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:10:08.824142 containerd[1552]: 2025-09-09 00:10:08.810 [INFO][4408] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" HandleID="k8s-pod-network.670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Workload="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:08.824142 containerd[1552]: 2025-09-09 00:10:08.810 [INFO][4408] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:08.824142 containerd[1552]: 2025-09-09 00:10:08.810 [INFO][4408] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:08.824142 containerd[1552]: 2025-09-09 00:10:08.818 [WARNING][4408] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" HandleID="k8s-pod-network.670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Workload="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:08.824142 containerd[1552]: 2025-09-09 00:10:08.818 [INFO][4408] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" HandleID="k8s-pod-network.670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Workload="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:08.824142 containerd[1552]: 2025-09-09 00:10:08.821 [INFO][4408] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:08.824142 containerd[1552]: 2025-09-09 00:10:08.822 [INFO][4400] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:10:08.824608 containerd[1552]: time="2025-09-09T00:10:08.824260609Z" level=info msg="TearDown network for sandbox \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\" successfully" Sep 9 00:10:08.824608 containerd[1552]: time="2025-09-09T00:10:08.824286290Z" level=info msg="StopPodSandbox for \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\" returns successfully" Sep 9 00:10:08.826358 containerd[1552]: time="2025-09-09T00:10:08.826308163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d475789b5-phkbj,Uid:bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:10:08.827431 systemd[1]: run-netns-cni\x2d615c44d7\x2d1565\x2d18e8\x2d3f5f\x2d70103559ca16.mount: Deactivated successfully. Sep 9 00:10:08.937892 systemd-networkd[1232]: calia1c1a030a72: Link UP Sep 9 00:10:08.938310 systemd-networkd[1232]: calia1c1a030a72: Gained carrier Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.871 [INFO][4416] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0 calico-apiserver-d475789b5- calico-apiserver bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b 928 0 2025-09-09 00:09:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d475789b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d475789b5-phkbj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia1c1a030a72 [] [] }} ContainerID="032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-phkbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--phkbj-" Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.872 [INFO][4416] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-phkbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.895 [INFO][4431] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" 
HandleID="k8s-pod-network.032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" Workload="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.895 [INFO][4431] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" HandleID="k8s-pod-network.032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" Workload="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d475789b5-phkbj", "timestamp":"2025-09-09 00:10:08.895742142 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.895 [INFO][4431] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.895 [INFO][4431] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.895 [INFO][4431] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.905 [INFO][4431] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" host="localhost" Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.910 [INFO][4431] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.914 [INFO][4431] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.916 [INFO][4431] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.918 [INFO][4431] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.919 [INFO][4431] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" host="localhost" Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.920 [INFO][4431] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2 Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.925 [INFO][4431] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" host="localhost" Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.932 [INFO][4431] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" host="localhost" Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.932 [INFO][4431] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" 
host="localhost" Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.932 [INFO][4431] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:08.952585 containerd[1552]: 2025-09-09 00:10:08.932 [INFO][4431] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" HandleID="k8s-pod-network.032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" Workload="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:08.953110 containerd[1552]: 2025-09-09 00:10:08.934 [INFO][4416] cni-plugin/k8s.go 418: Populated endpoint ContainerID="032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-phkbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0", GenerateName:"calico-apiserver-d475789b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d475789b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d475789b5-phkbj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia1c1a030a72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:08.953110 containerd[1552]: 2025-09-09 00:10:08.935 [INFO][4416] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-phkbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:08.953110 containerd[1552]: 2025-09-09 00:10:08.935 [INFO][4416] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1c1a030a72 ContainerID="032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-phkbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:08.953110 containerd[1552]: 2025-09-09 00:10:08.939 [INFO][4416] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-phkbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:08.953110 containerd[1552]: 2025-09-09 00:10:08.939 
[INFO][4416] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-phkbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0", GenerateName:"calico-apiserver-d475789b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d475789b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2", Pod:"calico-apiserver-d475789b5-phkbj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia1c1a030a72", MAC:"1e:07:ad:b5:1f:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:08.953110 containerd[1552]: 2025-09-09 00:10:08.949 [INFO][4416] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-phkbj" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:08.967800 containerd[1552]: time="2025-09-09T00:10:08.967578134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:10:08.967800 containerd[1552]: time="2025-09-09T00:10:08.967635177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:10:08.967800 containerd[1552]: time="2025-09-09T00:10:08.967646938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:08.967800 containerd[1552]: time="2025-09-09T00:10:08.967729422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:08.994589 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:10:09.013632 containerd[1552]: time="2025-09-09T00:10:09.013597072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d475789b5-phkbj,Uid:bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2\"" Sep 9 00:10:09.019297 containerd[1552]: time="2025-09-09T00:10:09.019266098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:10:09.689963 containerd[1552]: time="2025-09-09T00:10:09.689923710Z" level=info msg="StopPodSandbox for \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\"" Sep 9 00:10:09.692202 containerd[1552]: time="2025-09-09T00:10:09.692115348Z" level=info msg="StopPodSandbox for \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\"" Sep 9 00:10:09.797873 containerd[1552]: 2025-09-09 00:10:09.761 [INFO][4519] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:10:09.797873 containerd[1552]: 2025-09-09 00:10:09.761 [INFO][4519] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" iface="eth0" netns="/var/run/netns/cni-03a28ef9-69d0-1582-ffe5-c61e0d403cd6" Sep 9 00:10:09.797873 containerd[1552]: 2025-09-09 00:10:09.761 [INFO][4519] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" iface="eth0" netns="/var/run/netns/cni-03a28ef9-69d0-1582-ffe5-c61e0d403cd6" Sep 9 00:10:09.797873 containerd[1552]: 2025-09-09 00:10:09.761 [INFO][4519] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" iface="eth0" netns="/var/run/netns/cni-03a28ef9-69d0-1582-ffe5-c61e0d403cd6" Sep 9 00:10:09.797873 containerd[1552]: 2025-09-09 00:10:09.761 [INFO][4519] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:10:09.797873 containerd[1552]: 2025-09-09 00:10:09.761 [INFO][4519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:10:09.797873 containerd[1552]: 2025-09-09 00:10:09.781 [INFO][4541] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" HandleID="k8s-pod-network.00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Workload="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:09.797873 containerd[1552]: 2025-09-09 00:10:09.781 [INFO][4541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:09.797873 containerd[1552]: 2025-09-09 00:10:09.782 [INFO][4541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:09.797873 containerd[1552]: 2025-09-09 00:10:09.790 [WARNING][4541] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" HandleID="k8s-pod-network.00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Workload="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:09.797873 containerd[1552]: 2025-09-09 00:10:09.790 [INFO][4541] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" HandleID="k8s-pod-network.00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Workload="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:09.797873 containerd[1552]: 2025-09-09 00:10:09.791 [INFO][4541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:09.797873 containerd[1552]: 2025-09-09 00:10:09.796 [INFO][4519] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:10:09.799741 containerd[1552]: time="2025-09-09T00:10:09.798762066Z" level=info msg="TearDown network for sandbox \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\" successfully" Sep 9 00:10:09.799741 containerd[1552]: time="2025-09-09T00:10:09.798794908Z" level=info msg="StopPodSandbox for \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\" returns successfully" Sep 9 00:10:09.799786 kubelet[2625]: E0909 00:10:09.799206 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:09.801365 containerd[1552]: time="2025-09-09T00:10:09.800302749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8wxhk,Uid:571f56a2-cb8d-4d68-9002-2ce4d310ff3f,Namespace:kube-system,Attempt:1,}" Sep 9 00:10:09.802317 systemd[1]: run-netns-cni\x2d03a28ef9\x2d69d0\x2d1582\x2dffe5\x2dc61e0d403cd6.mount: Deactivated successfully. Sep 9 00:10:09.810185 containerd[1552]: 2025-09-09 00:10:09.757 [INFO][4520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:10:09.810185 containerd[1552]: 2025-09-09 00:10:09.757 [INFO][4520] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" iface="eth0" netns="/var/run/netns/cni-24df5645-7826-3193-0dc8-57165aaf3c9e" Sep 9 00:10:09.810185 containerd[1552]: 2025-09-09 00:10:09.758 [INFO][4520] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" iface="eth0" netns="/var/run/netns/cni-24df5645-7826-3193-0dc8-57165aaf3c9e" Sep 9 00:10:09.810185 containerd[1552]: 2025-09-09 00:10:09.758 [INFO][4520] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" iface="eth0" netns="/var/run/netns/cni-24df5645-7826-3193-0dc8-57165aaf3c9e" Sep 9 00:10:09.810185 containerd[1552]: 2025-09-09 00:10:09.758 [INFO][4520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:10:09.810185 containerd[1552]: 2025-09-09 00:10:09.758 [INFO][4520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:10:09.810185 containerd[1552]: 2025-09-09 00:10:09.782 [INFO][4536] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" HandleID="k8s-pod-network.95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Workload="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:09.810185 containerd[1552]: 2025-09-09 00:10:09.783 [INFO][4536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:09.810185 containerd[1552]: 2025-09-09 00:10:09.791 [INFO][4536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:09.810185 containerd[1552]: 2025-09-09 00:10:09.802 [WARNING][4536] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" HandleID="k8s-pod-network.95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Workload="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:09.810185 containerd[1552]: 2025-09-09 00:10:09.802 [INFO][4536] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" HandleID="k8s-pod-network.95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Workload="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:09.810185 containerd[1552]: 2025-09-09 00:10:09.805 [INFO][4536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:09.810185 containerd[1552]: 2025-09-09 00:10:09.807 [INFO][4520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:10:09.812356 containerd[1552]: time="2025-09-09T00:10:09.812323998Z" level=info msg="TearDown network for sandbox \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\" successfully" Sep 9 00:10:09.812356 containerd[1552]: time="2025-09-09T00:10:09.812353240Z" level=info msg="StopPodSandbox for \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\" returns successfully" Sep 9 00:10:09.813058 containerd[1552]: time="2025-09-09T00:10:09.813028716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d475789b5-j7mc9,Uid:9bf6427e-824b-4ca1-8c25-8463e096ff46,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:10:09.814090 systemd[1]: run-netns-cni\x2d24df5645\x2d7826\x2d3193\x2d0dc8\x2d57165aaf3c9e.mount: Deactivated successfully. 
Sep 9 00:10:09.933191 systemd-networkd[1232]: cali8331e67938a: Link UP Sep 9 00:10:09.933382 systemd-networkd[1232]: cali8331e67938a: Gained carrier Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.862 [INFO][4559] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0 calico-apiserver-d475789b5- calico-apiserver 9bf6427e-824b-4ca1-8c25-8463e096ff46 937 0 2025-09-09 00:09:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d475789b5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d475789b5-j7mc9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8331e67938a [] [] }} ContainerID="a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-j7mc9" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--j7mc9-" Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.862 [INFO][4559] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-j7mc9" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.890 [INFO][4581] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" HandleID="k8s-pod-network.a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" Workload="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.890 [INFO][4581] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" HandleID="k8s-pod-network.a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" Workload="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001365c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d475789b5-j7mc9", "timestamp":"2025-09-09 00:10:09.890643187 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.890 [INFO][4581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.890 [INFO][4581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.890 [INFO][4581] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.901 [INFO][4581] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" host="localhost" Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.905 [INFO][4581] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.910 [INFO][4581] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.912 [INFO][4581] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.914 [INFO][4581] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.914 [INFO][4581] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" host="localhost" Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.915 [INFO][4581] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737 Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.922 [INFO][4581] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" host="localhost" Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.927 [INFO][4581] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" host="localhost" Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.927 [INFO][4581] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" host="localhost" Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.927 [INFO][4581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:10:09.949233 containerd[1552]: 2025-09-09 00:10:09.927 [INFO][4581] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" HandleID="k8s-pod-network.a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" Workload="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:09.949778 containerd[1552]: 2025-09-09 00:10:09.930 [INFO][4559] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-j7mc9" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0", GenerateName:"calico-apiserver-d475789b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bf6427e-824b-4ca1-8c25-8463e096ff46", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d475789b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d475789b5-j7mc9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8331e67938a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:09.949778 containerd[1552]: 2025-09-09 00:10:09.931 [INFO][4559] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-j7mc9" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:09.949778 containerd[1552]: 2025-09-09 00:10:09.931 [INFO][4559] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8331e67938a ContainerID="a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-j7mc9" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:09.949778 containerd[1552]: 2025-09-09 00:10:09.933 [INFO][4559] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-j7mc9" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:09.949778 containerd[1552]: 2025-09-09 00:10:09.933 [INFO][4559] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-j7mc9" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0", GenerateName:"calico-apiserver-d475789b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bf6427e-824b-4ca1-8c25-8463e096ff46", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d475789b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737", Pod:"calico-apiserver-d475789b5-j7mc9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8331e67938a", MAC:"a6:51:26:af:3c:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:09.949778 containerd[1552]: 2025-09-09 00:10:09.947 [INFO][4559] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737" Namespace="calico-apiserver" Pod="calico-apiserver-d475789b5-j7mc9" WorkloadEndpoint="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:09.965687 containerd[1552]: time="2025-09-09T00:10:09.965473828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:10:09.965687 containerd[1552]: time="2025-09-09T00:10:09.965530991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:10:09.965687 containerd[1552]: time="2025-09-09T00:10:09.965541831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:09.965687 containerd[1552]: time="2025-09-09T00:10:09.965621996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:09.997612 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:10:10.025969 containerd[1552]: time="2025-09-09T00:10:10.025852691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d475789b5-j7mc9,Uid:9bf6427e-824b-4ca1-8c25-8463e096ff46,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737\"" Sep 9 00:10:10.041740 systemd-networkd[1232]: cali4a3b6a5c171: Link UP Sep 9 00:10:10.042393 systemd-networkd[1232]: cali4a3b6a5c171: Gained carrier Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:09.868 [INFO][4554] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0 coredns-7c65d6cfc9- kube-system 571f56a2-cb8d-4d68-9002-2ce4d310ff3f 938 0 2025-09-09 00:09:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-8wxhk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4a3b6a5c171 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8wxhk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8wxhk-" Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:09.869 [INFO][4554] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8wxhk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:09.893 [INFO][4587] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" HandleID="k8s-pod-network.65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" Workload="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:09.893 [INFO][4587] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" HandleID="k8s-pod-network.65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" Workload="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd050), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-8wxhk", "timestamp":"2025-09-09 00:10:09.893730634 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:09.893 [INFO][4587] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:09.927 [INFO][4587] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:09.927 [INFO][4587] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:10.003 [INFO][4587] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" host="localhost" Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:10.011 [INFO][4587] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:10.016 [INFO][4587] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:10.018 [INFO][4587] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:10.022 [INFO][4587] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:10.022 [INFO][4587] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" host="localhost" Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:10.023 [INFO][4587] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9 Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:10.027 [INFO][4587] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" host="localhost" Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:10.035 [INFO][4587] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" host="localhost" Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:10.035 [INFO][4587] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" host="localhost" Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:10.035 [INFO][4587] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:10:10.058459 containerd[1552]: 2025-09-09 00:10:10.035 [INFO][4587] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" HandleID="k8s-pod-network.65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" Workload="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:10.059031 containerd[1552]: 2025-09-09 00:10:10.038 [INFO][4554] cni-plugin/k8s.go 418: Populated endpoint ContainerID="65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8wxhk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"571f56a2-cb8d-4d68-9002-2ce4d310ff3f", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-8wxhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a3b6a5c171", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:10.059031 containerd[1552]: 2025-09-09 00:10:10.038 [INFO][4554] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8wxhk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:10.059031 containerd[1552]: 2025-09-09 00:10:10.038 [INFO][4554] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a3b6a5c171 ContainerID="65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8wxhk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:10.059031 containerd[1552]: 2025-09-09 00:10:10.041 [INFO][4554] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8wxhk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:10.059031 
containerd[1552]: 2025-09-09 00:10:10.042 [INFO][4554] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8wxhk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"571f56a2-cb8d-4d68-9002-2ce4d310ff3f", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9", Pod:"coredns-7c65d6cfc9-8wxhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a3b6a5c171", MAC:"6a:e2:8d:3f:a9:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:10.059031 containerd[1552]: 2025-09-09 00:10:10.055 [INFO][4554] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8wxhk" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:10.074634 containerd[1552]: time="2025-09-09T00:10:10.074236112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:10:10.074634 containerd[1552]: time="2025-09-09T00:10:10.074605411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:10:10.074634 containerd[1552]: time="2025-09-09T00:10:10.074616652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:10.074831 containerd[1552]: time="2025-09-09T00:10:10.074697856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:10.097852 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:10:10.114849 containerd[1552]: time="2025-09-09T00:10:10.114815523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8wxhk,Uid:571f56a2-cb8d-4d68-9002-2ce4d310ff3f,Namespace:kube-system,Attempt:1,} returns sandbox id \"65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9\"" Sep 9 00:10:10.115567 kubelet[2625]: E0909 00:10:10.115529 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:10.117957 containerd[1552]: time="2025-09-09T00:10:10.117920406Z" level=info msg="CreateContainer within sandbox \"65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:10:10.145713 containerd[1552]: time="2025-09-09T00:10:10.145676703Z" level=info msg="CreateContainer within sandbox \"65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cb1bf5be0f9f8dea5a9da203ba04835fdf7d3cca0f24422ac42a1b3b55684b5d\"" Sep 9 00:10:10.146344 containerd[1552]: time="2025-09-09T00:10:10.146321737Z" level=info msg="StartContainer for \"cb1bf5be0f9f8dea5a9da203ba04835fdf7d3cca0f24422ac42a1b3b55684b5d\"" Sep 9 00:10:10.200511 containerd[1552]: time="2025-09-09T00:10:10.200398337Z" level=info msg="StartContainer for \"cb1bf5be0f9f8dea5a9da203ba04835fdf7d3cca0f24422ac42a1b3b55684b5d\" returns successfully" Sep 9 00:10:10.692019 containerd[1552]: time="2025-09-09T00:10:10.690391667Z" level=info msg="StopPodSandbox for \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\"" Sep 9 00:10:10.780860 containerd[1552]: 2025-09-09 00:10:10.748 [INFO][4749] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:10:10.780860 containerd[1552]: 2025-09-09 00:10:10.748 [INFO][4749] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" iface="eth0" netns="/var/run/netns/cni-8b76d554-0dbc-1d20-7879-a473261cb390" Sep 9 00:10:10.780860 containerd[1552]: 2025-09-09 00:10:10.748 [INFO][4749] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" iface="eth0" netns="/var/run/netns/cni-8b76d554-0dbc-1d20-7879-a473261cb390" Sep 9 00:10:10.780860 containerd[1552]: 2025-09-09 00:10:10.749 [INFO][4749] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" iface="eth0" netns="/var/run/netns/cni-8b76d554-0dbc-1d20-7879-a473261cb390" Sep 9 00:10:10.780860 containerd[1552]: 2025-09-09 00:10:10.749 [INFO][4749] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:10:10.780860 containerd[1552]: 2025-09-09 00:10:10.749 [INFO][4749] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:10:10.780860 containerd[1552]: 2025-09-09 00:10:10.767 [INFO][4759] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" HandleID="k8s-pod-network.f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Workload="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:10.780860 containerd[1552]: 2025-09-09 00:10:10.767 [INFO][4759] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:10.780860 containerd[1552]: 2025-09-09 00:10:10.767 [INFO][4759] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:10.780860 containerd[1552]: 2025-09-09 00:10:10.775 [WARNING][4759] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" HandleID="k8s-pod-network.f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Workload="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:10.780860 containerd[1552]: 2025-09-09 00:10:10.776 [INFO][4759] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" HandleID="k8s-pod-network.f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Workload="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:10.780860 containerd[1552]: 2025-09-09 00:10:10.777 [INFO][4759] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:10.780860 containerd[1552]: 2025-09-09 00:10:10.779 [INFO][4749] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:10:10.785044 containerd[1552]: time="2025-09-09T00:10:10.784243115Z" level=info msg="TearDown network for sandbox \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\" successfully" Sep 9 00:10:10.785044 containerd[1552]: time="2025-09-09T00:10:10.784274477Z" level=info msg="StopPodSandbox for \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\" returns successfully" Sep 9 00:10:10.783322 systemd[1]: run-netns-cni\x2d8b76d554\x2d0dbc\x2d1d20\x2d7879\x2da473261cb390.mount: Deactivated successfully. 
Sep 9 00:10:10.785696 containerd[1552]: time="2025-09-09T00:10:10.785661830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d49f97799-ghr5z,Uid:a80d3a72-2041-413d-bbb6-13849b69be7e,Namespace:calico-system,Attempt:1,}" Sep 9 00:10:10.857796 kubelet[2625]: E0909 00:10:10.857738 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:10.873675 systemd-networkd[1232]: calia1c1a030a72: Gained IPv6LL Sep 9 00:10:10.897054 kubelet[2625]: I0909 00:10:10.896885 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8wxhk" podStartSLOduration=32.89686723 podStartE2EDuration="32.89686723s" podCreationTimestamp="2025-09-09 00:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:10:10.874441572 +0000 UTC m=+40.273873274" watchObservedRunningTime="2025-09-09 00:10:10.89686723 +0000 UTC m=+40.296298892" Sep 9 00:10:10.950121 systemd-networkd[1232]: calib2cc3e9375a: Link UP Sep 9 00:10:10.950501 systemd-networkd[1232]: calib2cc3e9375a: Gained carrier Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.833 [INFO][4767] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0 calico-kube-controllers-6d49f97799- calico-system a80d3a72-2041-413d-bbb6-13849b69be7e 954 0 2025-09-09 00:09:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d49f97799 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6d49f97799-ghr5z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib2cc3e9375a [] [] }} ContainerID="cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" Namespace="calico-system" Pod="calico-kube-controllers-6d49f97799-ghr5z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-" Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.833 [INFO][4767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" Namespace="calico-system" Pod="calico-kube-controllers-6d49f97799-ghr5z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.899 [INFO][4782] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" HandleID="k8s-pod-network.cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" Workload="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.899 [INFO][4782] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" HandleID="k8s-pod-network.cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" Workload="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003226b0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6d49f97799-ghr5z", "timestamp":"2025-09-09 00:10:10.89935896 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.899 [INFO][4782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.899 [INFO][4782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.899 [INFO][4782] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.911 [INFO][4782] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" host="localhost" Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.919 [INFO][4782] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.923 [INFO][4782] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.926 [INFO][4782] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.928 [INFO][4782] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.928 [INFO][4782] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" host="localhost" Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.931 [INFO][4782] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135 Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.936 [INFO][4782] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" host="localhost" Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.943 [INFO][4782] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" host="localhost" Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.943 [INFO][4782] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" host="localhost" Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.943 [INFO][4782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:10:10.967631 containerd[1552]: 2025-09-09 00:10:10.943 [INFO][4782] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" HandleID="k8s-pod-network.cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" Workload="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:10.968193 containerd[1552]: 2025-09-09 00:10:10.948 [INFO][4767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" Namespace="calico-system" Pod="calico-kube-controllers-6d49f97799-ghr5z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0", GenerateName:"calico-kube-controllers-6d49f97799-", Namespace:"calico-system", SelfLink:"", UID:"a80d3a72-2041-413d-bbb6-13849b69be7e", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d49f97799", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6d49f97799-ghr5z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2cc3e9375a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:10.968193 containerd[1552]: 2025-09-09 00:10:10.948 [INFO][4767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" Namespace="calico-system" Pod="calico-kube-controllers-6d49f97799-ghr5z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:10.968193 containerd[1552]: 2025-09-09 00:10:10.948 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2cc3e9375a ContainerID="cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" Namespace="calico-system" Pod="calico-kube-controllers-6d49f97799-ghr5z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:10.968193 containerd[1552]: 2025-09-09 00:10:10.950 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" Namespace="calico-system" Pod="calico-kube-controllers-6d49f97799-ghr5z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:10.968193 containerd[1552]: 2025-09-09 00:10:10.951 [INFO][4767] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" Namespace="calico-system" Pod="calico-kube-controllers-6d49f97799-ghr5z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0", GenerateName:"calico-kube-controllers-6d49f97799-", Namespace:"calico-system", SelfLink:"", UID:"a80d3a72-2041-413d-bbb6-13849b69be7e", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d49f97799", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135", Pod:"calico-kube-controllers-6d49f97799-ghr5z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2cc3e9375a", MAC:"2a:c2:b6:b2:de:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:10.968193 containerd[1552]: 2025-09-09 00:10:10.964 [INFO][4767] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135" Namespace="calico-system" Pod="calico-kube-controllers-6d49f97799-ghr5z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:10.989709 containerd[1552]: time="2025-09-09T00:10:10.989623620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:10:10.989709 containerd[1552]: time="2025-09-09T00:10:10.989668343Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:10:10.989709 containerd[1552]: time="2025-09-09T00:10:10.989679543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:10.989709 containerd[1552]: time="2025-09-09T00:10:10.989750627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:11.014904 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:10:11.044161 containerd[1552]: time="2025-09-09T00:10:11.044055019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d49f97799-ghr5z,Uid:a80d3a72-2041-413d-bbb6-13849b69be7e,Namespace:calico-system,Attempt:1,} returns sandbox id \"cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135\"" Sep 9 00:10:11.127302 systemd-networkd[1232]: cali4a3b6a5c171: Gained IPv6LL Sep 9 00:10:11.193850 systemd-networkd[1232]: cali8331e67938a: Gained IPv6LL Sep 9 00:10:11.689793 containerd[1552]: time="2025-09-09T00:10:11.689740508Z" level=info msg="StopPodSandbox for \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\"" Sep 9 00:10:11.790305 containerd[1552]: 2025-09-09 00:10:11.747 [INFO][4857] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:10:11.790305 containerd[1552]: 2025-09-09 00:10:11.747 [INFO][4857] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" iface="eth0" netns="/var/run/netns/cni-b3a59cee-e11d-15d8-a40b-491dac830d5e" Sep 9 00:10:11.790305 containerd[1552]: 2025-09-09 00:10:11.748 [INFO][4857] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" iface="eth0" netns="/var/run/netns/cni-b3a59cee-e11d-15d8-a40b-491dac830d5e" Sep 9 00:10:11.790305 containerd[1552]: 2025-09-09 00:10:11.748 [INFO][4857] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" iface="eth0" netns="/var/run/netns/cni-b3a59cee-e11d-15d8-a40b-491dac830d5e" Sep 9 00:10:11.790305 containerd[1552]: 2025-09-09 00:10:11.748 [INFO][4857] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:10:11.790305 containerd[1552]: 2025-09-09 00:10:11.748 [INFO][4857] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:10:11.790305 containerd[1552]: 2025-09-09 00:10:11.768 [INFO][4865] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" HandleID="k8s-pod-network.ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Workload="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:11.790305 containerd[1552]: 2025-09-09 00:10:11.768 [INFO][4865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:11.790305 containerd[1552]: 2025-09-09 00:10:11.768 [INFO][4865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:11.790305 containerd[1552]: 2025-09-09 00:10:11.781 [WARNING][4865] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" HandleID="k8s-pod-network.ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Workload="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:11.790305 containerd[1552]: 2025-09-09 00:10:11.781 [INFO][4865] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" HandleID="k8s-pod-network.ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Workload="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:11.790305 containerd[1552]: 2025-09-09 00:10:11.782 [INFO][4865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:11.790305 containerd[1552]: 2025-09-09 00:10:11.787 [INFO][4857] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:10:11.791508 containerd[1552]: time="2025-09-09T00:10:11.791270298Z" level=info msg="TearDown network for sandbox \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\" successfully" Sep 9 00:10:11.791508 containerd[1552]: time="2025-09-09T00:10:11.791304700Z" level=info msg="StopPodSandbox for \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\" returns successfully" Sep 9 00:10:11.791990 containerd[1552]: time="2025-09-09T00:10:11.791951133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bdppx,Uid:bad4beb6-5789-44ba-9fa9-719ce9756876,Namespace:calico-system,Attempt:1,}" Sep 9 00:10:11.793255 systemd[1]: run-netns-cni\x2db3a59cee\x2de11d\x2d15d8\x2da40b\x2d491dac830d5e.mount: Deactivated successfully. Sep 9 00:10:11.866849 kubelet[2625]: E0909 00:10:11.866568 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:11.930550 systemd-networkd[1232]: caliad094409d1e: Link UP Sep 9 00:10:11.931484 systemd-networkd[1232]: caliad094409d1e: Gained carrier Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.850 [INFO][4872] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--bdppx-eth0 goldmane-7988f88666- calico-system bad4beb6-5789-44ba-9fa9-719ce9756876 972 0 2025-09-09 00:09:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-bdppx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliad094409d1e [] [] }} ContainerID="edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" Namespace="calico-system" Pod="goldmane-7988f88666-bdppx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--bdppx-" Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.850 [INFO][4872] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" Namespace="calico-system" Pod="goldmane-7988f88666-bdppx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.884 [INFO][4886] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" HandleID="k8s-pod-network.edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" Workload="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.884 [INFO][4886] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" HandleID="k8s-pod-network.edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" Workload="localhost-k8s-goldmane--7988f88666--bdppx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c570), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-bdppx", "timestamp":"2025-09-09 00:10:11.88481788 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.885 [INFO][4886] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.885 [INFO][4886] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.885 [INFO][4886] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.893 [INFO][4886] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" host="localhost" Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.900 [INFO][4886] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.906 [INFO][4886] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.908 [INFO][4886] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.910 [INFO][4886] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.910 [INFO][4886] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" host="localhost" Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.911 [INFO][4886] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54 Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.915 [INFO][4886] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" host="localhost" Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.922 [INFO][4886] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" host="localhost" Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.922 [INFO][4886] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] 
handle="k8s-pod-network.edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" host="localhost" Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.922 [INFO][4886] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:11.949401 containerd[1552]: 2025-09-09 00:10:11.922 [INFO][4886] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" HandleID="k8s-pod-network.edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" Workload="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:11.949867 containerd[1552]: 2025-09-09 00:10:11.924 [INFO][4872] cni-plugin/k8s.go 418: Populated endpoint ContainerID="edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" Namespace="calico-system" Pod="goldmane-7988f88666-bdppx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--bdppx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--bdppx-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"bad4beb6-5789-44ba-9fa9-719ce9756876", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-bdppx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliad094409d1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:11.949867 containerd[1552]: 2025-09-09 00:10:11.924 [INFO][4872] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" Namespace="calico-system" Pod="goldmane-7988f88666-bdppx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:11.949867 containerd[1552]: 2025-09-09 00:10:11.924 [INFO][4872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad094409d1e ContainerID="edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" Namespace="calico-system" Pod="goldmane-7988f88666-bdppx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:11.949867 containerd[1552]: 2025-09-09 00:10:11.932 [INFO][4872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" Namespace="calico-system" Pod="goldmane-7988f88666-bdppx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:11.949867 containerd[1552]: 2025-09-09 00:10:11.933 [INFO][4872] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" Namespace="calico-system" Pod="goldmane-7988f88666-bdppx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--bdppx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--bdppx-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"bad4beb6-5789-44ba-9fa9-719ce9756876", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54", Pod:"goldmane-7988f88666-bdppx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliad094409d1e", MAC:"ba:d8:0c:16:7d:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:11.949867 containerd[1552]: 2025-09-09 00:10:11.941 [INFO][4872] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54" Namespace="calico-system" Pod="goldmane-7988f88666-bdppx" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:11.976889 containerd[1552]: time="2025-09-09T00:10:11.974886205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:10:11.976889 containerd[1552]: time="2025-09-09T00:10:11.974948408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:10:11.976889 containerd[1552]: time="2025-09-09T00:10:11.975022052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:11.976889 containerd[1552]: time="2025-09-09T00:10:11.975123097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:12.007682 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:10:12.041327 containerd[1552]: time="2025-09-09T00:10:12.041290067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-bdppx,Uid:bad4beb6-5789-44ba-9fa9-719ce9756876,Namespace:calico-system,Attempt:1,} returns sandbox id \"edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54\"" Sep 9 00:10:12.190444 systemd[1]: run-containerd-runc-k8s.io-edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54-runc.H8hWLZ.mount: Deactivated successfully. 
Sep 9 00:10:12.663282 systemd-networkd[1232]: calib2cc3e9375a: Gained IPv6LL Sep 9 00:10:12.689621 containerd[1552]: time="2025-09-09T00:10:12.689420677Z" level=info msg="StopPodSandbox for \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\"" Sep 9 00:10:12.689988 containerd[1552]: time="2025-09-09T00:10:12.689949303Z" level=info msg="StopPodSandbox for \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\"" Sep 9 00:10:12.786145 containerd[1552]: 2025-09-09 00:10:12.743 [INFO][4961] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:10:12.786145 containerd[1552]: 2025-09-09 00:10:12.744 [INFO][4961] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" iface="eth0" netns="/var/run/netns/cni-e4750239-6640-4fb6-ec8d-017fd40012cc" Sep 9 00:10:12.786145 containerd[1552]: 2025-09-09 00:10:12.744 [INFO][4961] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" iface="eth0" netns="/var/run/netns/cni-e4750239-6640-4fb6-ec8d-017fd40012cc" Sep 9 00:10:12.786145 containerd[1552]: 2025-09-09 00:10:12.744 [INFO][4961] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" iface="eth0" netns="/var/run/netns/cni-e4750239-6640-4fb6-ec8d-017fd40012cc" Sep 9 00:10:12.786145 containerd[1552]: 2025-09-09 00:10:12.744 [INFO][4961] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:10:12.786145 containerd[1552]: 2025-09-09 00:10:12.744 [INFO][4961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:10:12.786145 containerd[1552]: 2025-09-09 00:10:12.764 [INFO][4982] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" HandleID="k8s-pod-network.cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Workload="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:12.786145 containerd[1552]: 2025-09-09 00:10:12.764 [INFO][4982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:12.786145 containerd[1552]: 2025-09-09 00:10:12.764 [INFO][4982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:12.786145 containerd[1552]: 2025-09-09 00:10:12.774 [WARNING][4982] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" HandleID="k8s-pod-network.cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Workload="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:12.786145 containerd[1552]: 2025-09-09 00:10:12.774 [INFO][4982] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" HandleID="k8s-pod-network.cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Workload="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:12.786145 containerd[1552]: 2025-09-09 00:10:12.775 [INFO][4982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:10:12.786145 containerd[1552]: 2025-09-09 00:10:12.781 [INFO][4961] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:10:12.791148 containerd[1552]: 2025-09-09 00:10:12.741 [INFO][4971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:10:12.791148 containerd[1552]: 2025-09-09 00:10:12.741 [INFO][4971] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" iface="eth0" netns="/var/run/netns/cni-036b0fb7-882f-49ce-4dc3-8a338f7a3fa6" Sep 9 00:10:12.791148 containerd[1552]: 2025-09-09 00:10:12.741 [INFO][4971] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" iface="eth0" netns="/var/run/netns/cni-036b0fb7-882f-49ce-4dc3-8a338f7a3fa6" Sep 9 00:10:12.791148 containerd[1552]: 2025-09-09 00:10:12.744 [INFO][4971] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" iface="eth0" netns="/var/run/netns/cni-036b0fb7-882f-49ce-4dc3-8a338f7a3fa6" Sep 9 00:10:12.791148 containerd[1552]: 2025-09-09 00:10:12.744 [INFO][4971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:10:12.791148 containerd[1552]: 2025-09-09 00:10:12.744 [INFO][4971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:10:12.791148 containerd[1552]: 2025-09-09 00:10:12.764 [INFO][4984] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" HandleID="k8s-pod-network.a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Workload="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:12.791148 containerd[1552]: 2025-09-09 00:10:12.764 [INFO][4984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:12.791148 containerd[1552]: 2025-09-09 00:10:12.775 [INFO][4984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:12.791148 containerd[1552]: 2025-09-09 00:10:12.784 [WARNING][4984] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" HandleID="k8s-pod-network.a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Workload="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:12.791148 containerd[1552]: 2025-09-09 00:10:12.784 [INFO][4984] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" HandleID="k8s-pod-network.a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Workload="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:12.791148 containerd[1552]: 2025-09-09 00:10:12.785 [INFO][4984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:12.791148 containerd[1552]: 2025-09-09 00:10:12.787 [INFO][4971] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:10:12.791148 containerd[1552]: time="2025-09-09T00:10:12.790294942Z" level=info msg="TearDown network for sandbox \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\" successfully" Sep 9 00:10:12.791148 containerd[1552]: time="2025-09-09T00:10:12.790326064Z" level=info msg="StopPodSandbox for \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\" returns successfully" Sep 9 00:10:12.795132 containerd[1552]: time="2025-09-09T00:10:12.793165885Z" level=info msg="TearDown network for sandbox \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\" successfully" Sep 9 00:10:12.795132 containerd[1552]: time="2025-09-09T00:10:12.793194886Z" level=info msg="StopPodSandbox for \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\" returns successfully" Sep 9 00:10:12.793715 systemd[1]: run-netns-cni\x2d036b0fb7\x2d882f\x2d49ce\x2d4dc3\x2d8a338f7a3fa6.mount: Deactivated successfully. Sep 9 00:10:12.793851 systemd[1]: run-netns-cni\x2de4750239\x2d6640\x2d4fb6\x2dec8d\x2d017fd40012cc.mount: Deactivated successfully. Sep 9 00:10:12.797401 kubelet[2625]: E0909 00:10:12.797371 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:12.798171 containerd[1552]: time="2025-09-09T00:10:12.797733673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gjbqz,Uid:ed0dfce1-e4dc-4958-825b-d1a4c64907b2,Namespace:kube-system,Attempt:1,}" Sep 9 00:10:12.798171 containerd[1552]: time="2025-09-09T00:10:12.797930522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6ncd,Uid:fed5e32a-e444-4618-8bda-5d30e5270140,Namespace:calico-system,Attempt:1,}" Sep 9 00:10:12.876256 kubelet[2625]: E0909 00:10:12.875982 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:12.942436 systemd-networkd[1232]: cali607bf7883bf: Link UP Sep 9 00:10:12.944097 systemd-networkd[1232]: cali607bf7883bf: Gained carrier Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.860 [INFO][5001] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s6ncd-eth0 csi-node-driver- calico-system fed5e32a-e444-4618-8bda-5d30e5270140 982 0 2025-09-09 00:09:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-s6ncd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali607bf7883bf [] [] }} ContainerID="8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" Namespace="calico-system" Pod="csi-node-driver-s6ncd" WorkloadEndpoint="localhost-k8s-csi--node--driver--s6ncd-" Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.861 [INFO][5001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" Namespace="calico-system" Pod="csi-node-driver-s6ncd" WorkloadEndpoint="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 
9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.896 [INFO][5028] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" HandleID="k8s-pod-network.8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" Workload="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.897 [INFO][5028] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" HandleID="k8s-pod-network.8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" Workload="localhost-k8s-csi--node--driver--s6ncd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s6ncd", "timestamp":"2025-09-09 00:10:12.896901893 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.897 [INFO][5028] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.897 [INFO][5028] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.897 [INFO][5028] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.906 [INFO][5028] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" host="localhost" Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.910 [INFO][5028] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.914 [INFO][5028] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.915 [INFO][5028] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.918 [INFO][5028] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.918 [INFO][5028] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" host="localhost" Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.920 [INFO][5028] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6 Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.926 [INFO][5028] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" host="localhost" Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.931 [INFO][5028] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" host="localhost" Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.931 
[INFO][5028] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" host="localhost" Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.931 [INFO][5028] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:12.969556 containerd[1552]: 2025-09-09 00:10:12.931 [INFO][5028] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" HandleID="k8s-pod-network.8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" Workload="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:12.970096 containerd[1552]: 2025-09-09 00:10:12.936 [INFO][5001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" Namespace="calico-system" Pod="csi-node-driver-s6ncd" WorkloadEndpoint="localhost-k8s-csi--node--driver--s6ncd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s6ncd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fed5e32a-e444-4618-8bda-5d30e5270140", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s6ncd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali607bf7883bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:12.970096 containerd[1552]: 2025-09-09 00:10:12.936 [INFO][5001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" Namespace="calico-system" Pod="csi-node-driver-s6ncd" WorkloadEndpoint="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:12.970096 containerd[1552]: 2025-09-09 00:10:12.936 [INFO][5001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali607bf7883bf ContainerID="8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" Namespace="calico-system" Pod="csi-node-driver-s6ncd" WorkloadEndpoint="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:12.970096 containerd[1552]: 2025-09-09 00:10:12.944 [INFO][5001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" Namespace="calico-system" Pod="csi-node-driver-s6ncd" WorkloadEndpoint="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 
00:10:12.970096 containerd[1552]: 2025-09-09 00:10:12.947 [INFO][5001] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" Namespace="calico-system" Pod="csi-node-driver-s6ncd" WorkloadEndpoint="localhost-k8s-csi--node--driver--s6ncd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s6ncd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fed5e32a-e444-4618-8bda-5d30e5270140", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6", Pod:"csi-node-driver-s6ncd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali607bf7883bf", MAC:"da:ab:82:1d:87:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:12.970096 containerd[1552]: 2025-09-09 00:10:12.964 [INFO][5001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6" Namespace="calico-system" Pod="csi-node-driver-s6ncd" WorkloadEndpoint="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:13.018546 containerd[1552]: time="2025-09-09T00:10:13.018448487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:10:13.018546 containerd[1552]: time="2025-09-09T00:10:13.018505090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:10:13.018546 containerd[1552]: time="2025-09-09T00:10:13.018520811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:13.018834 containerd[1552]: time="2025-09-09T00:10:13.018602735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:13.049959 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:10:13.053192 systemd-networkd[1232]: cali5338fee8784: Link UP Sep 9 00:10:13.054645 systemd-networkd[1232]: cali5338fee8784: Gained carrier Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:12.870 [INFO][5011] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0 coredns-7c65d6cfc9- kube-system ed0dfce1-e4dc-4958-825b-d1a4c64907b2 983 0 2025-09-09 00:09:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-gjbqz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5338fee8784 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjbqz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjbqz-" Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:12.870 [INFO][5011] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjbqz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:12.906 [INFO][5034] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" HandleID="k8s-pod-network.15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" Workload="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:12.906 [INFO][5034] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" HandleID="k8s-pod-network.15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" Workload="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c32d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-gjbqz", "timestamp":"2025-09-09 00:10:12.906692061 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:12.906 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:12.931 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:12.931 [INFO][5034] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:13.007 [INFO][5034] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" host="localhost" Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:13.015 [INFO][5034] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:13.023 [INFO][5034] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:13.025 [INFO][5034] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:13.029 [INFO][5034] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:13.029 [INFO][5034] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" host="localhost" Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:13.034 [INFO][5034] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2 Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:13.038 [INFO][5034] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" host="localhost" Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:13.045 [INFO][5034] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" host="localhost" Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:13.046 [INFO][5034] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" host="localhost" Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:13.046 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
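The IPAM entries above trace Calico's block-affinity allocation end to end: take the host-wide IPAM lock, confirm this host's affinity for the 192.168.88.128/26 block, load the block, claim the next free address (.135 for csi-node-driver-s6ncd, then .136 for coredns moments later), write the block back to the datastore, and release the lock. The Go sketch below is a minimal illustration of the next-free-address step only; it is not Calico's code (the real logic lives in ipam/ipam.go, as the file references in the log show).

// Illustrative sketch only -- NOT Calico's implementation. It mimics the
// behaviour visible in the log: a host holds an affinity for the block
// 192.168.88.128/26 and hands out the next unallocated address in order,
// which is why csi-node-driver-s6ncd got .135 and coredns got .136.
package main

import (
	"fmt"
	"net/netip"
)

func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			allocated[a] = true // persisted by "Writing block in order to claim IPs"
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted; IPAM would claim another block
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{}
	// Pretend .128-.134 were claimed by earlier pods on this host.
	for a := netip.MustParseAddr("192.168.88.128"); a.Compare(netip.MustParseAddr("192.168.88.135")) < 0; a = a.Next() {
		allocated[a] = true
	}
	ip1, _ := nextFree(block, allocated)
	ip2, _ := nextFree(block, allocated)
	fmt.Println(ip1, ip2) // 192.168.88.135 192.168.88.136
}

A /26 block spans 64 addresses, so this host can satisfy 64 workload IPs before IPAM has to claim affinity for another block.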
Sep 9 00:10:13.075494 containerd[1552]: 2025-09-09 00:10:13.046 [INFO][5034] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" HandleID="k8s-pod-network.15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" Workload="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:13.076564 containerd[1552]: 2025-09-09 00:10:13.049 [INFO][5011] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjbqz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ed0dfce1-e4dc-4958-825b-d1a4c64907b2", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-gjbqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5338fee8784", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:13.076564 containerd[1552]: 2025-09-09 00:10:13.049 [INFO][5011] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjbqz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:13.076564 containerd[1552]: 2025-09-09 00:10:13.049 [INFO][5011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5338fee8784 ContainerID="15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjbqz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:13.076564 containerd[1552]: 2025-09-09 00:10:13.053 [INFO][5011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjbqz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:13.076564 
containerd[1552]: 2025-09-09 00:10:13.054 [INFO][5011] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjbqz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ed0dfce1-e4dc-4958-825b-d1a4c64907b2", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2", Pod:"coredns-7c65d6cfc9-gjbqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5338fee8784", MAC:"46:17:91:db:4a:7a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:13.076564 containerd[1552]: 2025-09-09 00:10:13.068 [INFO][5011] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gjbqz" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:13.098573 containerd[1552]: time="2025-09-09T00:10:13.098399773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:10:13.098573 containerd[1552]: time="2025-09-09T00:10:13.098466776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:10:13.098573 containerd[1552]: time="2025-09-09T00:10:13.098481657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:13.098777 containerd[1552]: time="2025-09-09T00:10:13.098572061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:10:13.105934 containerd[1552]: time="2025-09-09T00:10:13.105889657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6ncd,Uid:fed5e32a-e444-4618-8bda-5d30e5270140,Namespace:calico-system,Attempt:1,} returns sandbox id \"8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6\"" Sep 9 00:10:13.112218 systemd-networkd[1232]: caliad094409d1e: Gained IPv6LL Sep 9 00:10:13.123704 systemd-resolved[1439]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:10:13.153442 containerd[1552]: time="2025-09-09T00:10:13.153405366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gjbqz,Uid:ed0dfce1-e4dc-4958-825b-d1a4c64907b2,Namespace:kube-system,Attempt:1,} returns sandbox id \"15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2\"" Sep 9 00:10:13.154490 kubelet[2625]: E0909 00:10:13.154461 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:13.157505 containerd[1552]: time="2025-09-09T00:10:13.157414801Z" level=info msg="CreateContainer within sandbox \"15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:10:13.173776 containerd[1552]: time="2025-09-09T00:10:13.173738074Z" level=info msg="CreateContainer within sandbox \"15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b6e2db29ce5a85859c7fc6fb32ecba923cb403355af474bc3051ceb31995da3a\"" Sep 9 00:10:13.175176 containerd[1552]: time="2025-09-09T00:10:13.174703641Z" level=info msg="StartContainer for \"b6e2db29ce5a85859c7fc6fb32ecba923cb403355af474bc3051ceb31995da3a\"" Sep 9 00:10:13.241824 containerd[1552]: time="2025-09-09T00:10:13.241526728Z" level=info msg="StartContainer for \"b6e2db29ce5a85859c7fc6fb32ecba923cb403355af474bc3051ceb31995da3a\" returns successfully" Sep 9 00:10:13.883948 kubelet[2625]: E0909 00:10:13.883914 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:13.898788 kubelet[2625]: I0909 00:10:13.896275 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gjbqz" podStartSLOduration=35.896257187 podStartE2EDuration="35.896257187s" podCreationTimestamp="2025-09-09 00:09:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:10:13.895889729 +0000 UTC m=+43.295321391" watchObservedRunningTime="2025-09-09 00:10:13.896257187 +0000 UTC m=+43.295688849" Sep 9 00:10:14.263271 systemd-networkd[1232]: cali607bf7883bf: Gained IPv6LL Sep 9 00:10:14.482097 containerd[1552]: time="2025-09-09T00:10:14.482042743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:14.483156 containerd[1552]: time="2025-09-09T00:10:14.482886744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 9 00:10:14.484933 containerd[1552]: time="2025-09-09T00:10:14.484853237Z" level=info msg="ImageCreate event 
name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:14.487326 containerd[1552]: time="2025-09-09T00:10:14.487289752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:14.488264 containerd[1552]: time="2025-09-09T00:10:14.488222557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 5.468915936s" Sep 9 00:10:14.488264 containerd[1552]: time="2025-09-09T00:10:14.488258078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 00:10:14.489459 containerd[1552]: time="2025-09-09T00:10:14.489432494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:10:14.491181 containerd[1552]: time="2025-09-09T00:10:14.491143455Z" level=info msg="CreateContainer within sandbox \"032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:10:14.504178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1540946393.mount: Deactivated successfully. Sep 9 00:10:14.507851 containerd[1552]: time="2025-09-09T00:10:14.507803126Z" level=info msg="CreateContainer within sandbox \"032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9365682c22f04f9ccc42434428033a6c3c98a66898c2bb1ce36acc351b25acbb\"" Sep 9 00:10:14.509947 containerd[1552]: time="2025-09-09T00:10:14.508702289Z" level=info msg="StartContainer for \"9365682c22f04f9ccc42434428033a6c3c98a66898c2bb1ce36acc351b25acbb\"" Sep 9 00:10:14.583303 systemd-networkd[1232]: cali5338fee8784: Gained IPv6LL Sep 9 00:10:14.663900 containerd[1552]: time="2025-09-09T00:10:14.663643041Z" level=info msg="StartContainer for \"9365682c22f04f9ccc42434428033a6c3c98a66898c2bb1ce36acc351b25acbb\" returns successfully" Sep 9 00:10:14.755866 containerd[1552]: time="2025-09-09T00:10:14.755308151Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:14.756210 containerd[1552]: time="2025-09-09T00:10:14.756172832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 00:10:14.759433 containerd[1552]: time="2025-09-09T00:10:14.759396225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 269.933289ms" Sep 9 00:10:14.759433 containerd[1552]: time="2025-09-09T00:10:14.759432306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference 
\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 9 00:10:14.764129 containerd[1552]: time="2025-09-09T00:10:14.763200365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:10:14.765478 containerd[1552]: time="2025-09-09T00:10:14.765431991Z" level=info msg="CreateContainer within sandbox \"a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:10:14.779237 containerd[1552]: time="2025-09-09T00:10:14.779054397Z" level=info msg="CreateContainer within sandbox \"a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"10f09dd2d025eeabec7b7a33e26c687f1d1637c5be660801c99aca963a1b7743\"" Sep 9 00:10:14.779794 containerd[1552]: time="2025-09-09T00:10:14.779723509Z" level=info msg="StartContainer for \"10f09dd2d025eeabec7b7a33e26c687f1d1637c5be660801c99aca963a1b7743\"" Sep 9 00:10:14.866522 containerd[1552]: time="2025-09-09T00:10:14.866408423Z" level=info msg="StartContainer for \"10f09dd2d025eeabec7b7a33e26c687f1d1637c5be660801c99aca963a1b7743\" returns successfully" Sep 9 00:10:14.895897 kubelet[2625]: E0909 00:10:14.895523 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:14.904417 kubelet[2625]: I0909 00:10:14.904312 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d475789b5-phkbj" podStartSLOduration=23.429544735 podStartE2EDuration="28.90429554s" podCreationTimestamp="2025-09-09 00:09:46 +0000 UTC" firstStartedPulling="2025-09-09 00:10:09.014532482 +0000 UTC m=+38.413964144" lastFinishedPulling="2025-09-09 00:10:14.489283287 +0000 UTC m=+43.888714949" observedRunningTime="2025-09-09 00:10:14.902878273 +0000 UTC m=+44.302309935" watchObservedRunningTime="2025-09-09 00:10:14.90429554 +0000 UTC m=+44.303727202" Sep 9 00:10:14.934599 kubelet[2625]: I0909 00:10:14.934288 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d475789b5-j7mc9" podStartSLOduration=24.198639016 podStartE2EDuration="28.934271803s" podCreationTimestamp="2025-09-09 00:09:46 +0000 UTC" firstStartedPulling="2025-09-09 00:10:10.027136398 +0000 UTC m=+39.426568020" lastFinishedPulling="2025-09-09 00:10:14.762769145 +0000 UTC m=+44.162200807" observedRunningTime="2025-09-09 00:10:14.921816532 +0000 UTC m=+44.321248194" watchObservedRunningTime="2025-09-09 00:10:14.934271803 +0000 UTC m=+44.333703465" Sep 9 00:10:15.896590 kubelet[2625]: E0909 00:10:15.896552 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:15.897296 kubelet[2625]: I0909 00:10:15.896872 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:10:15.898282 kubelet[2625]: I0909 00:10:15.898248 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:10:16.937373 systemd[1]: Started sshd@7-10.0.0.10:22-10.0.0.1:45818.service - OpenSSH per-connection server daemon (10.0.0.1:45818). 
Sep 9 00:10:16.949125 containerd[1552]: time="2025-09-09T00:10:16.948807033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:16.951559 containerd[1552]: time="2025-09-09T00:10:16.950136093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 9 00:10:16.951559 containerd[1552]: time="2025-09-09T00:10:16.951205181Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:16.954024 containerd[1552]: time="2025-09-09T00:10:16.953658853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:16.954500 containerd[1552]: time="2025-09-09T00:10:16.954216678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.190974511s" Sep 9 00:10:16.954500 containerd[1552]: time="2025-09-09T00:10:16.954316363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 9 00:10:16.958363 containerd[1552]: time="2025-09-09T00:10:16.957929246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 00:10:16.972011 containerd[1552]: time="2025-09-09T00:10:16.971977644Z" level=info msg="CreateContainer within sandbox \"cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 00:10:16.994712 containerd[1552]: time="2025-09-09T00:10:16.994619671Z" level=info msg="CreateContainer within sandbox \"cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ad603192d8a2a66760f7bc2ea1de23712b32edf1a7ff05cd339d748d63b2b52d\"" Sep 9 00:10:16.995317 containerd[1552]: time="2025-09-09T00:10:16.995288061Z" level=info msg="StartContainer for \"ad603192d8a2a66760f7bc2ea1de23712b32edf1a7ff05cd339d748d63b2b52d\"" Sep 9 00:10:17.027398 sshd[5291]: Accepted publickey for core from 10.0.0.1 port 45818 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:17.029851 sshd[5291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:17.045722 systemd-logind[1530]: New session 8 of user core. Sep 9 00:10:17.052412 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 00:10:17.103848 containerd[1552]: time="2025-09-09T00:10:17.103795528Z" level=info msg="StartContainer for \"ad603192d8a2a66760f7bc2ea1de23712b32edf1a7ff05cd339d748d63b2b52d\" returns successfully" Sep 9 00:10:17.340233 sshd[5291]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:17.344032 systemd[1]: sshd@7-10.0.0.10:22-10.0.0.1:45818.service: Deactivated successfully. Sep 9 00:10:17.346297 systemd[1]: session-8.scope: Deactivated successfully. 
Sep 9 00:10:17.346703 systemd-logind[1530]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:10:17.347703 systemd-logind[1530]: Removed session 8. Sep 9 00:10:17.913726 kubelet[2625]: I0909 00:10:17.913666 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d49f97799-ghr5z" podStartSLOduration=22.003298898 podStartE2EDuration="27.913649389s" podCreationTimestamp="2025-09-09 00:09:50 +0000 UTC" firstStartedPulling="2025-09-09 00:10:11.046882804 +0000 UTC m=+40.446314466" lastFinishedPulling="2025-09-09 00:10:16.957233295 +0000 UTC m=+46.356664957" observedRunningTime="2025-09-09 00:10:17.912600622 +0000 UTC m=+47.312032284" watchObservedRunningTime="2025-09-09 00:10:17.913649389 +0000 UTC m=+47.313081051" Sep 9 00:10:18.903850 kubelet[2625]: I0909 00:10:18.903805 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:10:19.116037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3327063997.mount: Deactivated successfully. Sep 9 00:10:19.584043 containerd[1552]: time="2025-09-09T00:10:19.583993596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:19.585274 containerd[1552]: time="2025-09-09T00:10:19.585114684Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 9 00:10:19.586049 containerd[1552]: time="2025-09-09T00:10:19.586019083Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:19.588210 containerd[1552]: time="2025-09-09T00:10:19.588184855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:19.589059 containerd[1552]: time="2025-09-09T00:10:19.589026051Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.631064484s" Sep 9 00:10:19.589136 containerd[1552]: time="2025-09-09T00:10:19.589059372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 9 00:10:19.590134 containerd[1552]: time="2025-09-09T00:10:19.590114898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:10:19.591510 containerd[1552]: time="2025-09-09T00:10:19.591483196Z" level=info msg="CreateContainer within sandbox \"edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 00:10:19.603801 containerd[1552]: time="2025-09-09T00:10:19.603753120Z" level=info msg="CreateContainer within sandbox \"edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"487bbab1021aab9f555aca83507678271b3bf90b9da36f6fc1a173155781062e\"" Sep 9 00:10:19.604493 containerd[1552]: time="2025-09-09T00:10:19.604315144Z" level=info msg="StartContainer for 
\"487bbab1021aab9f555aca83507678271b3bf90b9da36f6fc1a173155781062e\"" Sep 9 00:10:19.718917 containerd[1552]: time="2025-09-09T00:10:19.718864317Z" level=info msg="StartContainer for \"487bbab1021aab9f555aca83507678271b3bf90b9da36f6fc1a173155781062e\" returns successfully" Sep 9 00:10:20.832469 containerd[1552]: time="2025-09-09T00:10:20.832213947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:20.832945 containerd[1552]: time="2025-09-09T00:10:20.832915017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 9 00:10:20.835738 containerd[1552]: time="2025-09-09T00:10:20.835686853Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:20.837806 containerd[1552]: time="2025-09-09T00:10:20.837766700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:20.839063 containerd[1552]: time="2025-09-09T00:10:20.839028913Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.248886055s" Sep 9 00:10:20.839117 containerd[1552]: time="2025-09-09T00:10:20.839064595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 9 00:10:20.842513 containerd[1552]: time="2025-09-09T00:10:20.842466257Z" level=info msg="CreateContainer within sandbox \"8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 00:10:20.864707 containerd[1552]: time="2025-09-09T00:10:20.862214845Z" level=info msg="CreateContainer within sandbox \"8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"52facfbd46aeb4614210c3c16deea3bf1b22ccb7b5710edabb2ccd5631251c9b\"" Sep 9 00:10:20.865394 containerd[1552]: time="2025-09-09T00:10:20.865292614Z" level=info msg="StartContainer for \"52facfbd46aeb4614210c3c16deea3bf1b22ccb7b5710edabb2ccd5631251c9b\"" Sep 9 00:10:20.950551 containerd[1552]: time="2025-09-09T00:10:20.950375503Z" level=info msg="StartContainer for \"52facfbd46aeb4614210c3c16deea3bf1b22ccb7b5710edabb2ccd5631251c9b\" returns successfully" Sep 9 00:10:20.953186 containerd[1552]: time="2025-09-09T00:10:20.953149579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 00:10:22.269775 containerd[1552]: time="2025-09-09T00:10:22.269725243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:22.270684 containerd[1552]: time="2025-09-09T00:10:22.270640920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 9 00:10:22.271641 containerd[1552]: time="2025-09-09T00:10:22.271395670Z" 
level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:22.273869 containerd[1552]: time="2025-09-09T00:10:22.273842930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:10:22.275166 containerd[1552]: time="2025-09-09T00:10:22.274706365Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.321517064s" Sep 9 00:10:22.275166 containerd[1552]: time="2025-09-09T00:10:22.274746966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 9 00:10:22.277781 containerd[1552]: time="2025-09-09T00:10:22.277745048Z" level=info msg="CreateContainer within sandbox \"8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 00:10:22.292097 containerd[1552]: time="2025-09-09T00:10:22.292052028Z" level=info msg="CreateContainer within sandbox \"8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"60a3ff3942a87d69c43fb082296a38fc27dcc6aa881c998be51a68c56df35f16\"" Sep 9 00:10:22.292502 containerd[1552]: time="2025-09-09T00:10:22.292476685Z" level=info msg="StartContainer for \"60a3ff3942a87d69c43fb082296a38fc27dcc6aa881c998be51a68c56df35f16\"" Sep 9 00:10:22.343686 containerd[1552]: time="2025-09-09T00:10:22.343552675Z" level=info msg="StartContainer for \"60a3ff3942a87d69c43fb082296a38fc27dcc6aa881c998be51a68c56df35f16\" returns successfully" Sep 9 00:10:22.348940 systemd[1]: Started sshd@8-10.0.0.10:22-10.0.0.1:53226.service - OpenSSH per-connection server daemon (10.0.0.1:53226). Sep 9 00:10:22.399125 sshd[5532]: Accepted publickey for core from 10.0.0.1 port 53226 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:22.405298 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:22.414671 systemd-logind[1530]: New session 9 of user core. Sep 9 00:10:22.425558 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 00:10:22.788639 sshd[5532]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:22.794478 systemd[1]: sshd@8-10.0.0.10:22-10.0.0.1:53226.service: Deactivated successfully. Sep 9 00:10:22.797663 systemd-logind[1530]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:10:22.797776 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:10:22.798298 kubelet[2625]: I0909 00:10:22.798138 2625 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 00:10:22.801712 systemd-logind[1530]: Removed session 9. 
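The recurring containerd pattern in this stretch of the log (PullImage, CreateContainer within a sandbox, StartContainer) is the kubelet driving containerd over the CRI gRPC API. The following is a bare-bones sketch of the two container calls using the published k8s.io/cri-api types; the socket path and the mostly-empty configs are placeholder assumptions, since the kubelet derives the real ones from the pod spec.

// A minimal sketch of the CRI calls behind the "CreateContainer within
// sandbox ... returns container id" / "StartContainer ... returns
// successfully" pairs in the log. Placeholder configs; not kubelet code.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// sandboxID would be the id RunPodSandbox returned, e.g. the
	// "8e2aab15..." id for csi-node-driver-s6ncd above.
	sandboxID := "8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6"

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-csi"},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/csi:v3.30.3"},
		},
		SandboxConfig: &runtimeapi.PodSandboxConfig{}, // abbreviated placeholder
	})
	if err != nil {
		panic(err)
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	})
	if err != nil {
		panic(err)
	}
}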
Sep 9 00:10:22.805507 kubelet[2625]: I0909 00:10:22.805457 2625 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 00:10:22.940154 kubelet[2625]: I0909 00:10:22.940082 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s6ncd" podStartSLOduration=23.774656431 podStartE2EDuration="32.940064808s" podCreationTimestamp="2025-09-09 00:09:50 +0000 UTC" firstStartedPulling="2025-09-09 00:10:13.110002456 +0000 UTC m=+42.509434078" lastFinishedPulling="2025-09-09 00:10:22.275410793 +0000 UTC m=+51.674842455" observedRunningTime="2025-09-09 00:10:22.939902282 +0000 UTC m=+52.339333944" watchObservedRunningTime="2025-09-09 00:10:22.940064808 +0000 UTC m=+52.339496510" Sep 9 00:10:22.940325 kubelet[2625]: I0909 00:10:22.940272 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-bdppx" podStartSLOduration=25.392588802 podStartE2EDuration="32.940267616s" podCreationTimestamp="2025-09-09 00:09:50 +0000 UTC" firstStartedPulling="2025-09-09 00:10:12.042313838 +0000 UTC m=+41.441745460" lastFinishedPulling="2025-09-09 00:10:19.589992612 +0000 UTC m=+48.989424274" observedRunningTime="2025-09-09 00:10:19.921742863 +0000 UTC m=+49.321174565" watchObservedRunningTime="2025-09-09 00:10:22.940267616 +0000 UTC m=+52.339699238" Sep 9 00:10:27.135208 kubelet[2625]: I0909 00:10:27.135164 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:10:27.810683 systemd[1]: Started sshd@9-10.0.0.10:22-10.0.0.1:53240.service - OpenSSH per-connection server daemon (10.0.0.1:53240). Sep 9 00:10:27.865660 sshd[5601]: Accepted publickey for core from 10.0.0.1 port 53240 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:27.867718 sshd[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:27.872412 systemd-logind[1530]: New session 10 of user core. Sep 9 00:10:27.879009 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 00:10:28.016413 sshd[5601]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:28.022359 systemd[1]: Started sshd@10-10.0.0.10:22-10.0.0.1:53252.service - OpenSSH per-connection server daemon (10.0.0.1:53252). Sep 9 00:10:28.022753 systemd[1]: sshd@9-10.0.0.10:22-10.0.0.1:53240.service: Deactivated successfully. Sep 9 00:10:28.026848 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:10:28.027462 systemd-logind[1530]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:10:28.028602 systemd-logind[1530]: Removed session 10. Sep 9 00:10:28.057491 sshd[5615]: Accepted publickey for core from 10.0.0.1 port 53252 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:28.058890 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:28.062593 systemd-logind[1530]: New session 11 of user core. Sep 9 00:10:28.067355 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:10:28.302549 sshd[5615]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:28.311123 systemd[1]: Started sshd@11-10.0.0.10:22-10.0.0.1:53264.service - OpenSSH per-connection server daemon (10.0.0.1:53264). Sep 9 00:10:28.315991 systemd[1]: sshd@10-10.0.0.10:22-10.0.0.1:53252.service: Deactivated successfully. Sep 9 00:10:28.317540 systemd-logind[1530]: Session 11 logged out. 
Waiting for processes to exit. Sep 9 00:10:28.318215 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:10:28.319071 systemd-logind[1530]: Removed session 11. Sep 9 00:10:28.371914 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 53264 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:28.373272 sshd[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:28.377965 systemd-logind[1530]: New session 12 of user core. Sep 9 00:10:28.386394 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:10:28.591085 sshd[5629]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:28.594930 systemd[1]: sshd@11-10.0.0.10:22-10.0.0.1:53264.service: Deactivated successfully. Sep 9 00:10:28.598375 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:10:28.598721 systemd-logind[1530]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:10:28.601691 systemd-logind[1530]: Removed session 12. Sep 9 00:10:30.689455 containerd[1552]: time="2025-09-09T00:10:30.689275657Z" level=info msg="StopPodSandbox for \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\"" Sep 9 00:10:30.787699 containerd[1552]: 2025-09-09 00:10:30.742 [WARNING][5665] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s6ncd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fed5e32a-e444-4618-8bda-5d30e5270140", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6", Pod:"csi-node-driver-s6ncd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali607bf7883bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:30.787699 containerd[1552]: 2025-09-09 00:10:30.742 [INFO][5665] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:10:30.787699 containerd[1552]: 2025-09-09 00:10:30.742 [INFO][5665] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" iface="eth0" netns="" Sep 9 00:10:30.787699 containerd[1552]: 2025-09-09 00:10:30.742 [INFO][5665] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:10:30.787699 containerd[1552]: 2025-09-09 00:10:30.742 [INFO][5665] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:10:30.787699 containerd[1552]: 2025-09-09 00:10:30.774 [INFO][5674] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" HandleID="k8s-pod-network.a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Workload="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:30.787699 containerd[1552]: 2025-09-09 00:10:30.774 [INFO][5674] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:30.787699 containerd[1552]: 2025-09-09 00:10:30.774 [INFO][5674] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:30.787699 containerd[1552]: 2025-09-09 00:10:30.782 [WARNING][5674] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" HandleID="k8s-pod-network.a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Workload="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:30.787699 containerd[1552]: 2025-09-09 00:10:30.782 [INFO][5674] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" HandleID="k8s-pod-network.a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Workload="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:30.787699 containerd[1552]: 2025-09-09 00:10:30.784 [INFO][5674] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:30.787699 containerd[1552]: 2025-09-09 00:10:30.786 [INFO][5665] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:10:30.788279 containerd[1552]: time="2025-09-09T00:10:30.787741401Z" level=info msg="TearDown network for sandbox \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\" successfully" Sep 9 00:10:30.788279 containerd[1552]: time="2025-09-09T00:10:30.787767002Z" level=info msg="StopPodSandbox for \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\" returns successfully" Sep 9 00:10:30.789689 containerd[1552]: time="2025-09-09T00:10:30.788598392Z" level=info msg="RemovePodSandbox for \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\"" Sep 9 00:10:30.800935 containerd[1552]: time="2025-09-09T00:10:30.800870798Z" level=info msg="Forcibly stopping sandbox \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\"" Sep 9 00:10:30.878931 containerd[1552]: 2025-09-09 00:10:30.836 [WARNING][5691] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s6ncd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fed5e32a-e444-4618-8bda-5d30e5270140", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e2aab1595effe4e7c7a4916c95b3269d80cd396df259280d8a460fd7ebbccd6", Pod:"csi-node-driver-s6ncd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali607bf7883bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:30.878931 containerd[1552]: 2025-09-09 00:10:30.836 [INFO][5691] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:10:30.878931 containerd[1552]: 2025-09-09 00:10:30.837 [INFO][5691] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" iface="eth0" netns="" Sep 9 00:10:30.878931 containerd[1552]: 2025-09-09 00:10:30.837 [INFO][5691] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:10:30.878931 containerd[1552]: 2025-09-09 00:10:30.837 [INFO][5691] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:10:30.878931 containerd[1552]: 2025-09-09 00:10:30.866 [INFO][5701] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" HandleID="k8s-pod-network.a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Workload="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:30.878931 containerd[1552]: 2025-09-09 00:10:30.866 [INFO][5701] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:30.878931 containerd[1552]: 2025-09-09 00:10:30.866 [INFO][5701] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:30.878931 containerd[1552]: 2025-09-09 00:10:30.874 [WARNING][5701] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" HandleID="k8s-pod-network.a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Workload="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:30.878931 containerd[1552]: 2025-09-09 00:10:30.874 [INFO][5701] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" HandleID="k8s-pod-network.a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Workload="localhost-k8s-csi--node--driver--s6ncd-eth0" Sep 9 00:10:30.878931 containerd[1552]: 2025-09-09 00:10:30.875 [INFO][5701] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:30.878931 containerd[1552]: 2025-09-09 00:10:30.877 [INFO][5691] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6" Sep 9 00:10:30.879360 containerd[1552]: time="2025-09-09T00:10:30.878971321Z" level=info msg="TearDown network for sandbox \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\" successfully" Sep 9 00:10:30.902933 containerd[1552]: time="2025-09-09T00:10:30.902724905Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:10:30.902933 containerd[1552]: time="2025-09-09T00:10:30.902814188Z" level=info msg="RemovePodSandbox \"a2df87306692a6babcbcdcef099f6154724e774bbe276c6e752813008606c9e6\" returns successfully" Sep 9 00:10:30.903452 containerd[1552]: time="2025-09-09T00:10:30.903426851Z" level=info msg="StopPodSandbox for \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\"" Sep 9 00:10:30.968043 containerd[1552]: 2025-09-09 00:10:30.935 [WARNING][5718] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--bdppx-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"bad4beb6-5789-44ba-9fa9-719ce9756876", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54", Pod:"goldmane-7988f88666-bdppx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliad094409d1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:30.968043 containerd[1552]: 2025-09-09 00:10:30.935 [INFO][5718] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:10:30.968043 containerd[1552]: 2025-09-09 00:10:30.935 [INFO][5718] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" iface="eth0" netns="" Sep 9 00:10:30.968043 containerd[1552]: 2025-09-09 00:10:30.935 [INFO][5718] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:10:30.968043 containerd[1552]: 2025-09-09 00:10:30.935 [INFO][5718] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:10:30.968043 containerd[1552]: 2025-09-09 00:10:30.954 [INFO][5727] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" HandleID="k8s-pod-network.ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Workload="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:30.968043 containerd[1552]: 2025-09-09 00:10:30.954 [INFO][5727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:30.968043 containerd[1552]: 2025-09-09 00:10:30.954 [INFO][5727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:30.968043 containerd[1552]: 2025-09-09 00:10:30.963 [WARNING][5727] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" HandleID="k8s-pod-network.ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Workload="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:30.968043 containerd[1552]: 2025-09-09 00:10:30.963 [INFO][5727] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" HandleID="k8s-pod-network.ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Workload="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:30.968043 containerd[1552]: 2025-09-09 00:10:30.964 [INFO][5727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:30.968043 containerd[1552]: 2025-09-09 00:10:30.966 [INFO][5718] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:10:30.968043 containerd[1552]: time="2025-09-09T00:10:30.968026001Z" level=info msg="TearDown network for sandbox \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\" successfully" Sep 9 00:10:30.968456 containerd[1552]: time="2025-09-09T00:10:30.968050202Z" level=info msg="StopPodSandbox for \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\" returns successfully" Sep 9 00:10:30.968456 containerd[1552]: time="2025-09-09T00:10:30.968395015Z" level=info msg="RemovePodSandbox for \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\"" Sep 9 00:10:30.968456 containerd[1552]: time="2025-09-09T00:10:30.968422336Z" level=info msg="Forcibly stopping sandbox \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\"" Sep 9 00:10:31.035572 containerd[1552]: 2025-09-09 00:10:31.001 [WARNING][5745] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--bdppx-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"bad4beb6-5789-44ba-9fa9-719ce9756876", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"edbc2c356c35cc71b782dd7861d957729c3e81ec32b521866410cb0ca272eb54", Pod:"goldmane-7988f88666-bdppx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliad094409d1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:31.035572 containerd[1552]: 2025-09-09 00:10:31.001 [INFO][5745] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:10:31.035572 containerd[1552]: 2025-09-09 00:10:31.001 [INFO][5745] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" iface="eth0" netns="" Sep 9 00:10:31.035572 containerd[1552]: 2025-09-09 00:10:31.001 [INFO][5745] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:10:31.035572 containerd[1552]: 2025-09-09 00:10:31.001 [INFO][5745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:10:31.035572 containerd[1552]: 2025-09-09 00:10:31.021 [INFO][5753] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" HandleID="k8s-pod-network.ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Workload="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:31.035572 containerd[1552]: 2025-09-09 00:10:31.022 [INFO][5753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:31.035572 containerd[1552]: 2025-09-09 00:10:31.022 [INFO][5753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:31.035572 containerd[1552]: 2025-09-09 00:10:31.030 [WARNING][5753] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" HandleID="k8s-pod-network.ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Workload="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:31.035572 containerd[1552]: 2025-09-09 00:10:31.030 [INFO][5753] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" HandleID="k8s-pod-network.ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Workload="localhost-k8s-goldmane--7988f88666--bdppx-eth0" Sep 9 00:10:31.035572 containerd[1552]: 2025-09-09 00:10:31.032 [INFO][5753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:31.035572 containerd[1552]: 2025-09-09 00:10:31.033 [INFO][5745] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6" Sep 9 00:10:31.035967 containerd[1552]: time="2025-09-09T00:10:31.035614408Z" level=info msg="TearDown network for sandbox \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\" successfully" Sep 9 00:10:31.039091 containerd[1552]: time="2025-09-09T00:10:31.039047011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:10:31.039180 containerd[1552]: time="2025-09-09T00:10:31.039162616Z" level=info msg="RemovePodSandbox \"ff1ad705fcd47e39bfcb0dc69235e815c204ab4c5d619a07c9c3fe149d0ad7f6\" returns successfully" Sep 9 00:10:31.039690 containerd[1552]: time="2025-09-09T00:10:31.039650593Z" level=info msg="StopPodSandbox for \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\"" Sep 9 00:10:31.113668 containerd[1552]: 2025-09-09 00:10:31.075 [WARNING][5771] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"571f56a2-cb8d-4d68-9002-2ce4d310ff3f", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9", Pod:"coredns-7c65d6cfc9-8wxhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a3b6a5c171", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:31.113668 containerd[1552]: 2025-09-09 00:10:31.075 [INFO][5771] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:10:31.113668 containerd[1552]: 2025-09-09 00:10:31.075 [INFO][5771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" iface="eth0" netns="" Sep 9 00:10:31.113668 containerd[1552]: 2025-09-09 00:10:31.075 [INFO][5771] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:10:31.113668 containerd[1552]: 2025-09-09 00:10:31.075 [INFO][5771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:10:31.113668 containerd[1552]: 2025-09-09 00:10:31.100 [INFO][5779] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" HandleID="k8s-pod-network.00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Workload="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:31.113668 containerd[1552]: 2025-09-09 00:10:31.101 [INFO][5779] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:31.113668 containerd[1552]: 2025-09-09 00:10:31.101 [INFO][5779] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:10:31.113668 containerd[1552]: 2025-09-09 00:10:31.108 [WARNING][5779] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" HandleID="k8s-pod-network.00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Workload="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:31.113668 containerd[1552]: 2025-09-09 00:10:31.108 [INFO][5779] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" HandleID="k8s-pod-network.00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Workload="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:31.113668 containerd[1552]: 2025-09-09 00:10:31.109 [INFO][5779] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:31.113668 containerd[1552]: 2025-09-09 00:10:31.111 [INFO][5771] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:10:31.114165 containerd[1552]: time="2025-09-09T00:10:31.113718660Z" level=info msg="TearDown network for sandbox \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\" successfully" Sep 9 00:10:31.114165 containerd[1552]: time="2025-09-09T00:10:31.113747341Z" level=info msg="StopPodSandbox for \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\" returns successfully" Sep 9 00:10:31.114213 containerd[1552]: time="2025-09-09T00:10:31.114197637Z" level=info msg="RemovePodSandbox for \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\"" Sep 9 00:10:31.114249 containerd[1552]: time="2025-09-09T00:10:31.114234079Z" level=info msg="Forcibly stopping sandbox \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\"" Sep 9 00:10:31.180057 containerd[1552]: 2025-09-09 00:10:31.145 [WARNING][5796] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"571f56a2-cb8d-4d68-9002-2ce4d310ff3f", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"65825711f462cc4f23e20c40826e8f9129c4ac4cb46ec26e4c6318df0e3630c9", Pod:"coredns-7c65d6cfc9-8wxhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4a3b6a5c171", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:31.180057 containerd[1552]: 2025-09-09 00:10:31.145 [INFO][5796] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:10:31.180057 containerd[1552]: 2025-09-09 00:10:31.145 [INFO][5796] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" iface="eth0" netns="" Sep 9 00:10:31.180057 containerd[1552]: 2025-09-09 00:10:31.145 [INFO][5796] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:10:31.180057 containerd[1552]: 2025-09-09 00:10:31.145 [INFO][5796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:10:31.180057 containerd[1552]: 2025-09-09 00:10:31.164 [INFO][5805] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" HandleID="k8s-pod-network.00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Workload="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:31.180057 containerd[1552]: 2025-09-09 00:10:31.164 [INFO][5805] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:31.180057 containerd[1552]: 2025-09-09 00:10:31.165 [INFO][5805] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:10:31.180057 containerd[1552]: 2025-09-09 00:10:31.174 [WARNING][5805] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" HandleID="k8s-pod-network.00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Workload="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:31.180057 containerd[1552]: 2025-09-09 00:10:31.174 [INFO][5805] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" HandleID="k8s-pod-network.00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Workload="localhost-k8s-coredns--7c65d6cfc9--8wxhk-eth0" Sep 9 00:10:31.180057 containerd[1552]: 2025-09-09 00:10:31.176 [INFO][5805] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:31.180057 containerd[1552]: 2025-09-09 00:10:31.178 [INFO][5796] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985" Sep 9 00:10:31.180467 containerd[1552]: time="2025-09-09T00:10:31.180098490Z" level=info msg="TearDown network for sandbox \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\" successfully" Sep 9 00:10:31.186812 containerd[1552]: time="2025-09-09T00:10:31.186768371Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:10:31.186889 containerd[1552]: time="2025-09-09T00:10:31.186830093Z" level=info msg="RemovePodSandbox \"00d1a6c0414dbe7883b626fc0385868af60aba5dbd8ccafa68008ec26757d985\" returns successfully" Sep 9 00:10:31.187288 containerd[1552]: time="2025-09-09T00:10:31.187252428Z" level=info msg="StopPodSandbox for \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\"" Sep 9 00:10:31.259305 containerd[1552]: 2025-09-09 00:10:31.227 [WARNING][5823] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0", GenerateName:"calico-kube-controllers-6d49f97799-", Namespace:"calico-system", SelfLink:"", UID:"a80d3a72-2041-413d-bbb6-13849b69be7e", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d49f97799", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135", Pod:"calico-kube-controllers-6d49f97799-ghr5z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2cc3e9375a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:31.259305 containerd[1552]: 2025-09-09 00:10:31.228 [INFO][5823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:10:31.259305 containerd[1552]: 2025-09-09 00:10:31.228 [INFO][5823] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" iface="eth0" netns="" Sep 9 00:10:31.259305 containerd[1552]: 2025-09-09 00:10:31.228 [INFO][5823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:10:31.259305 containerd[1552]: 2025-09-09 00:10:31.228 [INFO][5823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:10:31.259305 containerd[1552]: 2025-09-09 00:10:31.244 [INFO][5831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" HandleID="k8s-pod-network.f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Workload="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:31.259305 containerd[1552]: 2025-09-09 00:10:31.244 [INFO][5831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:31.259305 containerd[1552]: 2025-09-09 00:10:31.244 [INFO][5831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:31.259305 containerd[1552]: 2025-09-09 00:10:31.253 [WARNING][5831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" HandleID="k8s-pod-network.f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Workload="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:31.259305 containerd[1552]: 2025-09-09 00:10:31.253 [INFO][5831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" HandleID="k8s-pod-network.f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Workload="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:31.259305 containerd[1552]: 2025-09-09 00:10:31.255 [INFO][5831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:31.259305 containerd[1552]: 2025-09-09 00:10:31.257 [INFO][5823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:10:31.259305 containerd[1552]: time="2025-09-09T00:10:31.259288262Z" level=info msg="TearDown network for sandbox \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\" successfully" Sep 9 00:10:31.259701 containerd[1552]: time="2025-09-09T00:10:31.259314423Z" level=info msg="StopPodSandbox for \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\" returns successfully" Sep 9 00:10:31.260179 containerd[1552]: time="2025-09-09T00:10:31.259891444Z" level=info msg="RemovePodSandbox for \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\"" Sep 9 00:10:31.260179 containerd[1552]: time="2025-09-09T00:10:31.259927485Z" level=info msg="Forcibly stopping sandbox \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\"" Sep 9 00:10:31.328004 containerd[1552]: 2025-09-09 00:10:31.292 [WARNING][5849] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0", GenerateName:"calico-kube-controllers-6d49f97799-", Namespace:"calico-system", SelfLink:"", UID:"a80d3a72-2041-413d-bbb6-13849b69be7e", ResourceVersion:"1148", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d49f97799", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd5073990651a7698695f66192e47f4f6a4d2514b79cc2475aa8a1f2c68a2135", Pod:"calico-kube-controllers-6d49f97799-ghr5z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2cc3e9375a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:31.328004 containerd[1552]: 2025-09-09 00:10:31.292 [INFO][5849] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:10:31.328004 containerd[1552]: 2025-09-09 00:10:31.292 [INFO][5849] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" iface="eth0" netns="" Sep 9 00:10:31.328004 containerd[1552]: 2025-09-09 00:10:31.292 [INFO][5849] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:10:31.328004 containerd[1552]: 2025-09-09 00:10:31.292 [INFO][5849] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:10:31.328004 containerd[1552]: 2025-09-09 00:10:31.313 [INFO][5858] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" HandleID="k8s-pod-network.f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Workload="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:31.328004 containerd[1552]: 2025-09-09 00:10:31.313 [INFO][5858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:31.328004 containerd[1552]: 2025-09-09 00:10:31.313 [INFO][5858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:31.328004 containerd[1552]: 2025-09-09 00:10:31.323 [WARNING][5858] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" HandleID="k8s-pod-network.f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Workload="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:31.328004 containerd[1552]: 2025-09-09 00:10:31.323 [INFO][5858] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" HandleID="k8s-pod-network.f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Workload="localhost-k8s-calico--kube--controllers--6d49f97799--ghr5z-eth0" Sep 9 00:10:31.328004 containerd[1552]: 2025-09-09 00:10:31.324 [INFO][5858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:31.328004 containerd[1552]: 2025-09-09 00:10:31.326 [INFO][5849] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242" Sep 9 00:10:31.328451 containerd[1552]: time="2025-09-09T00:10:31.328054378Z" level=info msg="TearDown network for sandbox \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\" successfully" Sep 9 00:10:31.331087 containerd[1552]: time="2025-09-09T00:10:31.331055606Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:10:31.331147 containerd[1552]: time="2025-09-09T00:10:31.331126729Z" level=info msg="RemovePodSandbox \"f97fbc6df8067dfc163d69dd4039c5a4d64055eab5016f02e827747e8bafd242\" returns successfully" Sep 9 00:10:31.331620 containerd[1552]: time="2025-09-09T00:10:31.331594786Z" level=info msg="StopPodSandbox for \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\"" Sep 9 00:10:31.397307 containerd[1552]: 2025-09-09 00:10:31.365 [WARNING][5876] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0", GenerateName:"calico-apiserver-d475789b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bf6427e-824b-4ca1-8c25-8463e096ff46", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d475789b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737", Pod:"calico-apiserver-d475789b5-j7mc9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8331e67938a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:31.397307 containerd[1552]: 2025-09-09 00:10:31.365 [INFO][5876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:10:31.397307 containerd[1552]: 2025-09-09 00:10:31.365 [INFO][5876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" iface="eth0" netns="" Sep 9 00:10:31.397307 containerd[1552]: 2025-09-09 00:10:31.365 [INFO][5876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:10:31.397307 containerd[1552]: 2025-09-09 00:10:31.365 [INFO][5876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:10:31.397307 containerd[1552]: 2025-09-09 00:10:31.384 [INFO][5885] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" HandleID="k8s-pod-network.95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Workload="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:31.397307 containerd[1552]: 2025-09-09 00:10:31.384 [INFO][5885] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:31.397307 containerd[1552]: 2025-09-09 00:10:31.384 [INFO][5885] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:31.397307 containerd[1552]: 2025-09-09 00:10:31.392 [WARNING][5885] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" HandleID="k8s-pod-network.95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Workload="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:31.397307 containerd[1552]: 2025-09-09 00:10:31.392 [INFO][5885] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" HandleID="k8s-pod-network.95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Workload="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:31.397307 containerd[1552]: 2025-09-09 00:10:31.393 [INFO][5885] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:31.397307 containerd[1552]: 2025-09-09 00:10:31.395 [INFO][5876] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:10:31.397684 containerd[1552]: time="2025-09-09T00:10:31.397320192Z" level=info msg="TearDown network for sandbox \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\" successfully" Sep 9 00:10:31.397684 containerd[1552]: time="2025-09-09T00:10:31.397345193Z" level=info msg="StopPodSandbox for \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\" returns successfully" Sep 9 00:10:31.398176 containerd[1552]: time="2025-09-09T00:10:31.398150662Z" level=info msg="RemovePodSandbox for \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\"" Sep 9 00:10:31.398503 containerd[1552]: time="2025-09-09T00:10:31.398246906Z" level=info msg="Forcibly stopping sandbox \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\"" Sep 9 00:10:31.480133 containerd[1552]: 2025-09-09 00:10:31.430 [WARNING][5902] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0", GenerateName:"calico-apiserver-d475789b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"9bf6427e-824b-4ca1-8c25-8463e096ff46", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d475789b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a442434ea4b09ef0d0b027c608c1ab453b6fae32b3c6578086485a48f1cf2737", Pod:"calico-apiserver-d475789b5-j7mc9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8331e67938a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:31.480133 containerd[1552]: 2025-09-09 00:10:31.431 [INFO][5902] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:10:31.480133 containerd[1552]: 2025-09-09 00:10:31.431 [INFO][5902] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" iface="eth0" netns="" Sep 9 00:10:31.480133 containerd[1552]: 2025-09-09 00:10:31.431 [INFO][5902] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:10:31.480133 containerd[1552]: 2025-09-09 00:10:31.431 [INFO][5902] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:10:31.480133 containerd[1552]: 2025-09-09 00:10:31.466 [INFO][5912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" HandleID="k8s-pod-network.95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Workload="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:31.480133 containerd[1552]: 2025-09-09 00:10:31.466 [INFO][5912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:31.480133 containerd[1552]: 2025-09-09 00:10:31.466 [INFO][5912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:31.480133 containerd[1552]: 2025-09-09 00:10:31.474 [WARNING][5912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" HandleID="k8s-pod-network.95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Workload="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:31.480133 containerd[1552]: 2025-09-09 00:10:31.474 [INFO][5912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" HandleID="k8s-pod-network.95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Workload="localhost-k8s-calico--apiserver--d475789b5--j7mc9-eth0" Sep 9 00:10:31.480133 containerd[1552]: 2025-09-09 00:10:31.475 [INFO][5912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:31.480133 containerd[1552]: 2025-09-09 00:10:31.477 [INFO][5902] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71" Sep 9 00:10:31.480133 containerd[1552]: time="2025-09-09T00:10:31.478943612Z" level=info msg="TearDown network for sandbox \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\" successfully" Sep 9 00:10:31.482501 containerd[1552]: time="2025-09-09T00:10:31.482455378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:10:31.482660 containerd[1552]: time="2025-09-09T00:10:31.482642545Z" level=info msg="RemovePodSandbox \"95e454f19326b9fce3165a4302d2f7ca5cb351380ccebbefe31f82154e508d71\" returns successfully" Sep 9 00:10:31.483204 containerd[1552]: time="2025-09-09T00:10:31.483164884Z" level=info msg="StopPodSandbox for \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\"" Sep 9 00:10:31.587382 containerd[1552]: 2025-09-09 00:10:31.518 [WARNING][5929] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ed0dfce1-e4dc-4958-825b-d1a4c64907b2", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2", Pod:"coredns-7c65d6cfc9-gjbqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5338fee8784", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:31.587382 containerd[1552]: 2025-09-09 00:10:31.520 [INFO][5929] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:10:31.587382 containerd[1552]: 2025-09-09 00:10:31.520 [INFO][5929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" iface="eth0" netns="" Sep 9 00:10:31.587382 containerd[1552]: 2025-09-09 00:10:31.520 [INFO][5929] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:10:31.587382 containerd[1552]: 2025-09-09 00:10:31.520 [INFO][5929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:10:31.587382 containerd[1552]: 2025-09-09 00:10:31.555 [INFO][5938] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" HandleID="k8s-pod-network.cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Workload="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:31.587382 containerd[1552]: 2025-09-09 00:10:31.555 [INFO][5938] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:31.587382 containerd[1552]: 2025-09-09 00:10:31.555 [INFO][5938] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:10:31.587382 containerd[1552]: 2025-09-09 00:10:31.578 [WARNING][5938] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" HandleID="k8s-pod-network.cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Workload="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:31.587382 containerd[1552]: 2025-09-09 00:10:31.578 [INFO][5938] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" HandleID="k8s-pod-network.cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Workload="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:31.587382 containerd[1552]: 2025-09-09 00:10:31.580 [INFO][5938] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:31.587382 containerd[1552]: 2025-09-09 00:10:31.585 [INFO][5929] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:10:31.587382 containerd[1552]: time="2025-09-09T00:10:31.587280033Z" level=info msg="TearDown network for sandbox \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\" successfully" Sep 9 00:10:31.587382 containerd[1552]: time="2025-09-09T00:10:31.587303833Z" level=info msg="StopPodSandbox for \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\" returns successfully" Sep 9 00:10:31.590307 containerd[1552]: time="2025-09-09T00:10:31.590275901Z" level=info msg="RemovePodSandbox for \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\"" Sep 9 00:10:31.590424 containerd[1552]: time="2025-09-09T00:10:31.590408985Z" level=info msg="Forcibly stopping sandbox \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\"" Sep 9 00:10:31.666812 containerd[1552]: 2025-09-09 00:10:31.629 [WARNING][5956] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ed0dfce1-e4dc-4958-825b-d1a4c64907b2", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15ebecb9bb469a9a6dc7282e834b637767d97013b1f1c8e6a203cf4934d612c2", Pod:"coredns-7c65d6cfc9-gjbqz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5338fee8784", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:31.666812 containerd[1552]: 2025-09-09 00:10:31.630 [INFO][5956] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:10:31.666812 containerd[1552]: 2025-09-09 00:10:31.630 [INFO][5956] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" iface="eth0" netns="" Sep 9 00:10:31.666812 containerd[1552]: 2025-09-09 00:10:31.630 [INFO][5956] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:10:31.666812 containerd[1552]: 2025-09-09 00:10:31.630 [INFO][5956] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:10:31.666812 containerd[1552]: 2025-09-09 00:10:31.652 [INFO][5965] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" HandleID="k8s-pod-network.cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Workload="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:31.666812 containerd[1552]: 2025-09-09 00:10:31.652 [INFO][5965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:31.666812 containerd[1552]: 2025-09-09 00:10:31.652 [INFO][5965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:10:31.666812 containerd[1552]: 2025-09-09 00:10:31.661 [WARNING][5965] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" HandleID="k8s-pod-network.cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Workload="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:31.666812 containerd[1552]: 2025-09-09 00:10:31.661 [INFO][5965] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" HandleID="k8s-pod-network.cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Workload="localhost-k8s-coredns--7c65d6cfc9--gjbqz-eth0" Sep 9 00:10:31.666812 containerd[1552]: 2025-09-09 00:10:31.663 [INFO][5965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:31.666812 containerd[1552]: 2025-09-09 00:10:31.664 [INFO][5956] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e" Sep 9 00:10:31.667281 containerd[1552]: time="2025-09-09T00:10:31.666877579Z" level=info msg="TearDown network for sandbox \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\" successfully" Sep 9 00:10:31.670042 containerd[1552]: time="2025-09-09T00:10:31.669989971Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:10:31.670136 containerd[1552]: time="2025-09-09T00:10:31.670070734Z" level=info msg="RemovePodSandbox \"cdd497bbbc5acb7304b83b28645dca3656ff521c9769013fec5ac091cf8ff60e\" returns successfully" Sep 9 00:10:31.670597 containerd[1552]: time="2025-09-09T00:10:31.670571432Z" level=info msg="StopPodSandbox for \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\"" Sep 9 00:10:31.739919 containerd[1552]: 2025-09-09 00:10:31.706 [WARNING][5982] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" WorkloadEndpoint="localhost-k8s-whisker--75d7d5c5c4--2kg6j-eth0" Sep 9 00:10:31.739919 containerd[1552]: 2025-09-09 00:10:31.707 [INFO][5982] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:10:31.739919 containerd[1552]: 2025-09-09 00:10:31.707 [INFO][5982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" iface="eth0" netns="" Sep 9 00:10:31.739919 containerd[1552]: 2025-09-09 00:10:31.707 [INFO][5982] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:10:31.739919 containerd[1552]: 2025-09-09 00:10:31.707 [INFO][5982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:10:31.739919 containerd[1552]: 2025-09-09 00:10:31.724 [INFO][5991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" HandleID="k8s-pod-network.1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Workload="localhost-k8s-whisker--75d7d5c5c4--2kg6j-eth0" Sep 9 00:10:31.739919 containerd[1552]: 2025-09-09 00:10:31.724 [INFO][5991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:31.739919 containerd[1552]: 2025-09-09 00:10:31.724 [INFO][5991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:31.739919 containerd[1552]: 2025-09-09 00:10:31.733 [WARNING][5991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" HandleID="k8s-pod-network.1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Workload="localhost-k8s-whisker--75d7d5c5c4--2kg6j-eth0" Sep 9 00:10:31.739919 containerd[1552]: 2025-09-09 00:10:31.733 [INFO][5991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" HandleID="k8s-pod-network.1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Workload="localhost-k8s-whisker--75d7d5c5c4--2kg6j-eth0" Sep 9 00:10:31.739919 containerd[1552]: 2025-09-09 00:10:31.734 [INFO][5991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:31.739919 containerd[1552]: 2025-09-09 00:10:31.736 [INFO][5982] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:10:31.739919 containerd[1552]: time="2025-09-09T00:10:31.739837286Z" level=info msg="TearDown network for sandbox \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\" successfully" Sep 9 00:10:31.739919 containerd[1552]: time="2025-09-09T00:10:31.739862207Z" level=info msg="StopPodSandbox for \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\" returns successfully" Sep 9 00:10:31.741008 containerd[1552]: time="2025-09-09T00:10:31.740708997Z" level=info msg="RemovePodSandbox for \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\"" Sep 9 00:10:31.741008 containerd[1552]: time="2025-09-09T00:10:31.740746319Z" level=info msg="Forcibly stopping sandbox \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\"" Sep 9 00:10:31.818480 containerd[1552]: 2025-09-09 00:10:31.783 [WARNING][6009] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" WorkloadEndpoint="localhost-k8s-whisker--75d7d5c5c4--2kg6j-eth0" Sep 9 00:10:31.818480 containerd[1552]: 2025-09-09 00:10:31.783 [INFO][6009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:10:31.818480 containerd[1552]: 2025-09-09 00:10:31.783 [INFO][6009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" iface="eth0" netns="" Sep 9 00:10:31.818480 containerd[1552]: 2025-09-09 00:10:31.783 [INFO][6009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:10:31.818480 containerd[1552]: 2025-09-09 00:10:31.783 [INFO][6009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:10:31.818480 containerd[1552]: 2025-09-09 00:10:31.803 [INFO][6018] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" HandleID="k8s-pod-network.1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Workload="localhost-k8s-whisker--75d7d5c5c4--2kg6j-eth0" Sep 9 00:10:31.818480 containerd[1552]: 2025-09-09 00:10:31.803 [INFO][6018] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:31.818480 containerd[1552]: 2025-09-09 00:10:31.803 [INFO][6018] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:31.818480 containerd[1552]: 2025-09-09 00:10:31.811 [WARNING][6018] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" HandleID="k8s-pod-network.1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Workload="localhost-k8s-whisker--75d7d5c5c4--2kg6j-eth0" Sep 9 00:10:31.818480 containerd[1552]: 2025-09-09 00:10:31.811 [INFO][6018] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" HandleID="k8s-pod-network.1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Workload="localhost-k8s-whisker--75d7d5c5c4--2kg6j-eth0" Sep 9 00:10:31.818480 containerd[1552]: 2025-09-09 00:10:31.813 [INFO][6018] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:31.818480 containerd[1552]: 2025-09-09 00:10:31.814 [INFO][6009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37" Sep 9 00:10:31.819443 containerd[1552]: time="2025-09-09T00:10:31.818629203Z" level=info msg="TearDown network for sandbox \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\" successfully" Sep 9 00:10:31.821604 containerd[1552]: time="2025-09-09T00:10:31.821558789Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:10:31.821684 containerd[1552]: time="2025-09-09T00:10:31.821623591Z" level=info msg="RemovePodSandbox \"1d9dc91927313140ba35b7ef5fc96d10e7613a46221ed34461836d90f2d6fe37\" returns successfully" Sep 9 00:10:31.822385 containerd[1552]: time="2025-09-09T00:10:31.822071687Z" level=info msg="StopPodSandbox for \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\"" Sep 9 00:10:31.888456 containerd[1552]: 2025-09-09 00:10:31.854 [WARNING][6035] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0", GenerateName:"calico-apiserver-d475789b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d475789b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2", Pod:"calico-apiserver-d475789b5-phkbj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia1c1a030a72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:31.888456 containerd[1552]: 2025-09-09 00:10:31.854 [INFO][6035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:10:31.888456 containerd[1552]: 2025-09-09 00:10:31.854 [INFO][6035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" iface="eth0" netns="" Sep 9 00:10:31.888456 containerd[1552]: 2025-09-09 00:10:31.854 [INFO][6035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:10:31.888456 containerd[1552]: 2025-09-09 00:10:31.854 [INFO][6035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:10:31.888456 containerd[1552]: 2025-09-09 00:10:31.874 [INFO][6043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" HandleID="k8s-pod-network.670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Workload="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:31.888456 containerd[1552]: 2025-09-09 00:10:31.874 [INFO][6043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:31.888456 containerd[1552]: 2025-09-09 00:10:31.874 [INFO][6043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:31.888456 containerd[1552]: 2025-09-09 00:10:31.883 [WARNING][6043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" HandleID="k8s-pod-network.670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Workload="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:31.888456 containerd[1552]: 2025-09-09 00:10:31.883 [INFO][6043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" HandleID="k8s-pod-network.670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Workload="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:31.888456 containerd[1552]: 2025-09-09 00:10:31.885 [INFO][6043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:31.888456 containerd[1552]: 2025-09-09 00:10:31.886 [INFO][6035] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:10:31.888456 containerd[1552]: time="2025-09-09T00:10:31.888437037Z" level=info msg="TearDown network for sandbox \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\" successfully" Sep 9 00:10:31.889682 containerd[1552]: time="2025-09-09T00:10:31.888470078Z" level=info msg="StopPodSandbox for \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\" returns successfully" Sep 9 00:10:31.889682 containerd[1552]: time="2025-09-09T00:10:31.888981337Z" level=info msg="RemovePodSandbox for \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\"" Sep 9 00:10:31.889682 containerd[1552]: time="2025-09-09T00:10:31.889010578Z" level=info msg="Forcibly stopping sandbox \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\"" Sep 9 00:10:31.956738 containerd[1552]: 2025-09-09 00:10:31.922 [WARNING][6062] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0", GenerateName:"calico-apiserver-d475789b5-", Namespace:"calico-apiserver", SelfLink:"", UID:"bf8909d6-6dff-4be3-9cf1-098c6f9a0e2b", ResourceVersion:"1015", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 9, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d475789b5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"032338f261963c84e672092749e1089c8281e9d3b7b88f882de5c1b654cb61d2", Pod:"calico-apiserver-d475789b5-phkbj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia1c1a030a72", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:10:31.956738 containerd[1552]: 2025-09-09 00:10:31.923 [INFO][6062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:10:31.956738 containerd[1552]: 2025-09-09 00:10:31.923 [INFO][6062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" iface="eth0" netns="" Sep 9 00:10:31.956738 containerd[1552]: 2025-09-09 00:10:31.923 [INFO][6062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:10:31.956738 containerd[1552]: 2025-09-09 00:10:31.923 [INFO][6062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:10:31.956738 containerd[1552]: 2025-09-09 00:10:31.942 [INFO][6070] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" HandleID="k8s-pod-network.670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Workload="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:31.956738 containerd[1552]: 2025-09-09 00:10:31.942 [INFO][6070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:10:31.956738 containerd[1552]: 2025-09-09 00:10:31.942 [INFO][6070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:10:31.956738 containerd[1552]: 2025-09-09 00:10:31.950 [WARNING][6070] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" HandleID="k8s-pod-network.670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Workload="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:31.956738 containerd[1552]: 2025-09-09 00:10:31.950 [INFO][6070] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" HandleID="k8s-pod-network.670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Workload="localhost-k8s-calico--apiserver--d475789b5--phkbj-eth0" Sep 9 00:10:31.956738 containerd[1552]: 2025-09-09 00:10:31.952 [INFO][6070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:10:31.956738 containerd[1552]: 2025-09-09 00:10:31.954 [INFO][6062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5" Sep 9 00:10:31.956738 containerd[1552]: time="2025-09-09T00:10:31.956631373Z" level=info msg="TearDown network for sandbox \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\" successfully" Sep 9 00:10:31.961165 containerd[1552]: time="2025-09-09T00:10:31.961055452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:10:31.961333 containerd[1552]: time="2025-09-09T00:10:31.961260419Z" level=info msg="RemovePodSandbox \"670418cde08af1e905528b292ba57eabd7f95d4f983323a79dc59bf7d0b45dd5\" returns successfully" Sep 9 00:10:33.602459 systemd[1]: Started sshd@12-10.0.0.10:22-10.0.0.1:58092.service - OpenSSH per-connection server daemon (10.0.0.1:58092). Sep 9 00:10:33.648245 sshd[6079]: Accepted publickey for core from 10.0.0.1 port 58092 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:33.650000 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:33.654468 systemd-logind[1530]: New session 13 of user core. Sep 9 00:10:33.661405 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:10:33.866870 sshd[6079]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:33.870351 systemd-logind[1530]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:10:33.870883 systemd[1]: sshd@12-10.0.0.10:22-10.0.0.1:58092.service: Deactivated successfully. Sep 9 00:10:33.873257 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:10:33.874854 systemd-logind[1530]: Removed session 13. Sep 9 00:10:38.358516 systemd[1]: run-containerd-runc-k8s.io-c2084af97f9e6dc60f08a198287e75f79fcc7bb706c17ed1bca0d107103022e8-runc.Pkc7pE.mount: Deactivated successfully. Sep 9 00:10:38.883517 systemd[1]: Started sshd@13-10.0.0.10:22-10.0.0.1:58100.service - OpenSSH per-connection server daemon (10.0.0.1:58100). Sep 9 00:10:38.919798 sshd[6136]: Accepted publickey for core from 10.0.0.1 port 58100 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:38.921312 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:38.925013 systemd-logind[1530]: New session 14 of user core. Sep 9 00:10:38.932349 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 9 00:10:39.100981 sshd[6136]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:39.104801 systemd[1]: sshd@13-10.0.0.10:22-10.0.0.1:58100.service: Deactivated successfully. Sep 9 00:10:39.106947 systemd-logind[1530]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:10:39.106954 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:10:39.108720 systemd-logind[1530]: Removed session 14. Sep 9 00:10:44.106344 systemd[1]: Started sshd@14-10.0.0.10:22-10.0.0.1:44372.service - OpenSSH per-connection server daemon (10.0.0.1:44372). Sep 9 00:10:44.146288 sshd[6173]: Accepted publickey for core from 10.0.0.1 port 44372 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:44.147612 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:44.153941 systemd-logind[1530]: New session 15 of user core. Sep 9 00:10:44.164399 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:10:44.287849 sshd[6173]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:44.291362 systemd[1]: sshd@14-10.0.0.10:22-10.0.0.1:44372.service: Deactivated successfully. Sep 9 00:10:44.293685 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:10:44.293687 systemd-logind[1530]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:10:44.295097 systemd-logind[1530]: Removed session 15. Sep 9 00:10:46.193710 kubelet[2625]: I0909 00:10:46.193525 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:10:49.298362 systemd[1]: Started sshd@15-10.0.0.10:22-10.0.0.1:44388.service - OpenSSH per-connection server daemon (10.0.0.1:44388). Sep 9 00:10:49.333082 sshd[6196]: Accepted publickey for core from 10.0.0.1 port 44388 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:49.336860 sshd[6196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:49.341182 systemd-logind[1530]: New session 16 of user core. Sep 9 00:10:49.347576 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 00:10:49.539761 sshd[6196]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:49.555430 systemd[1]: Started sshd@16-10.0.0.10:22-10.0.0.1:44394.service - OpenSSH per-connection server daemon (10.0.0.1:44394). Sep 9 00:10:49.555848 systemd[1]: sshd@15-10.0.0.10:22-10.0.0.1:44388.service: Deactivated successfully. Sep 9 00:10:49.561214 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:10:49.564459 systemd-logind[1530]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:10:49.568520 systemd-logind[1530]: Removed session 16. Sep 9 00:10:49.598373 sshd[6209]: Accepted publickey for core from 10.0.0.1 port 44394 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:49.599854 sshd[6209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:49.605190 systemd-logind[1530]: New session 17 of user core. Sep 9 00:10:49.614489 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:10:49.931173 sshd[6209]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:49.937436 systemd[1]: Started sshd@17-10.0.0.10:22-10.0.0.1:52836.service - OpenSSH per-connection server daemon (10.0.0.1:52836). Sep 9 00:10:49.937838 systemd[1]: sshd@16-10.0.0.10:22-10.0.0.1:44394.service: Deactivated successfully. Sep 9 00:10:49.941391 systemd-logind[1530]: Session 17 logged out. Waiting for processes to exit. 
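
Each SSH connection in this stretch of the journal follows the same lifecycle: sshd accepts the public key, pam_unix opens the session, systemd-logind registers "New session N", and systemd starts session-N.scope; on disconnect the same steps unwind. Where "Started sshd@N+1" appears before "sshd@N ... Deactivated successfully", the next connection was simply accepted while the previous one was still tearing down. As a small worked example, the Go sketch below pairs logind open/close lines and prints session lifetimes; the regular expressions and timestamp layout are assumptions matched to the lines above, and it expects one journal entry per line on stdin, as journalctl emits them.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    // Matches entries like:
    //   Sep 9 00:10:38.925013 systemd-logind[1530]: New session 14 of user core.
    //   Sep 9 00:10:39.108720 systemd-logind[1530]: Removed session 14.
    var (
        reOpen  = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) systemd-logind\[\d+\]: New session (\d+) `)
        reClose = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) systemd-logind\[\d+\]: Removed session (\d+)\.`)
    )

    // parseTS parses the short journal timestamp; the year is absent in
    // this format, which is fine for computing durations within one log.
    func parseTS(s string) time.Time {
        t, _ := time.Parse("Jan 2 15:04:05.000000", s)
        return t
    }

    func main() {
        opened := map[string]time.Time{}
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            line := sc.Text()
            if m := reOpen.FindStringSubmatch(line); m != nil {
                opened[m[2]] = parseTS(m[1])
            } else if m := reClose.FindStringSubmatch(line); m != nil {
                if t0, ok := opened[m[2]]; ok {
                    fmt.Printf("session %s lasted %s\n", m[2], parseTS(m[1]).Sub(t0))
                }
            }
        }
    }
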
Sep 9 00:10:49.941421 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:10:49.942909 systemd-logind[1530]: Removed session 17. Sep 9 00:10:49.978738 sshd[6222]: Accepted publickey for core from 10.0.0.1 port 52836 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:49.980133 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:49.984583 systemd-logind[1530]: New session 18 of user core. Sep 9 00:10:49.997484 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 00:10:51.404774 kubelet[2625]: I0909 00:10:51.404720 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:10:51.909551 sshd[6222]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:51.917360 systemd[1]: Started sshd@18-10.0.0.10:22-10.0.0.1:52850.service - OpenSSH per-connection server daemon (10.0.0.1:52850). Sep 9 00:10:51.917772 systemd[1]: sshd@17-10.0.0.10:22-10.0.0.1:52836.service: Deactivated successfully. Sep 9 00:10:51.920948 systemd-logind[1530]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:10:51.921171 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:10:51.923009 systemd-logind[1530]: Removed session 18. Sep 9 00:10:51.984201 sshd[6245]: Accepted publickey for core from 10.0.0.1 port 52850 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:51.986200 sshd[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:51.990549 systemd-logind[1530]: New session 19 of user core. Sep 9 00:10:51.998467 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 00:10:52.502838 sshd[6245]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:52.511365 systemd[1]: Started sshd@19-10.0.0.10:22-10.0.0.1:52862.service - OpenSSH per-connection server daemon (10.0.0.1:52862). Sep 9 00:10:52.511782 systemd[1]: sshd@18-10.0.0.10:22-10.0.0.1:52850.service: Deactivated successfully. Sep 9 00:10:52.516458 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:10:52.517460 systemd-logind[1530]: Session 19 logged out. Waiting for processes to exit. Sep 9 00:10:52.519925 systemd-logind[1530]: Removed session 19. Sep 9 00:10:52.551053 sshd[6261]: Accepted publickey for core from 10.0.0.1 port 52862 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:52.552522 sshd[6261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:52.557014 systemd-logind[1530]: New session 20 of user core. Sep 9 00:10:52.569479 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:10:52.704217 sshd[6261]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:52.707565 systemd[1]: sshd@19-10.0.0.10:22-10.0.0.1:52862.service: Deactivated successfully. Sep 9 00:10:52.709726 systemd-logind[1530]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:10:52.709727 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:10:52.713550 systemd-logind[1530]: Removed session 20. 
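
Despite its alarming wording, the kubelet line "Failed to trigger a manual run" is an informational record (note the I-level prefix, I0909): when kubelet wants a readiness probe run outside the normal period, for example after a container state change, it performs a non-blocking send on the probe worker's buffered trigger channel and logs this line when a trigger is already pending. A minimal sketch of that non-blocking trigger pattern follows; the channel and function names are illustrative, not kubelet's actual source.

    package main

    import "log"

    // triggerManualRun does a non-blocking send on a probe worker's
    // buffered trigger channel. If a run is already pending, the send
    // would block, so it is skipped and logged instead -- the behaviour
    // behind the prober_manager.go entries above.
    func triggerManualRun(triggerCh chan struct{}, probe string) {
        select {
        case triggerCh <- struct{}{}:
        default:
            log.Printf("Failed to trigger a manual run, probe=%s", probe)
        }
    }

    func main() {
        ch := make(chan struct{}, 1)
        triggerManualRun(ch, "Readiness") // queued for the worker
        triggerManualRun(ch, "Readiness") // already pending: logged, skipped
    }
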
Sep 9 00:10:57.689969 kubelet[2625]: E0909 00:10:57.689808 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:57.689969 kubelet[2625]: E0909 00:10:57.689903 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:10:57.714391 systemd[1]: Started sshd@20-10.0.0.10:22-10.0.0.1:52878.service - OpenSSH per-connection server daemon (10.0.0.1:52878). Sep 9 00:10:57.747570 sshd[6305]: Accepted publickey for core from 10.0.0.1 port 52878 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:10:57.748811 sshd[6305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:10:57.752290 systemd-logind[1530]: New session 21 of user core. Sep 9 00:10:57.759352 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 00:10:57.880549 sshd[6305]: pam_unix(sshd:session): session closed for user core Sep 9 00:10:57.883641 systemd[1]: sshd@20-10.0.0.10:22-10.0.0.1:52878.service: Deactivated successfully. Sep 9 00:10:57.886088 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:10:57.887474 systemd-logind[1530]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:10:57.888472 systemd-logind[1530]: Removed session 21. Sep 9 00:10:58.690541 kubelet[2625]: E0909 00:10:58.690497 2625 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:11:02.892371 systemd[1]: Started sshd@21-10.0.0.10:22-10.0.0.1:35686.service - OpenSSH per-connection server daemon (10.0.0.1:35686). Sep 9 00:11:02.931758 sshd[6321]: Accepted publickey for core from 10.0.0.1 port 35686 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:11:02.933129 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:11:02.938749 systemd-logind[1530]: New session 22 of user core. Sep 9 00:11:02.948092 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 00:11:03.121494 sshd[6321]: pam_unix(sshd:session): session closed for user core Sep 9 00:11:03.125078 systemd[1]: sshd@21-10.0.0.10:22-10.0.0.1:35686.service: Deactivated successfully. Sep 9 00:11:03.128246 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:11:03.128279 systemd-logind[1530]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:11:03.129664 systemd-logind[1530]: Removed session 22. Sep 9 00:11:08.133378 systemd[1]: Started sshd@22-10.0.0.10:22-10.0.0.1:35690.service - OpenSSH per-connection server daemon (10.0.0.1:35690). Sep 9 00:11:08.171723 sshd[6337]: Accepted publickey for core from 10.0.0.1 port 35690 ssh2: RSA SHA256:h2hdqj5up/hBRHZQ3StgDpJiWnWjl57ZEr1UTjCMf5k Sep 9 00:11:08.175703 sshd[6337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:11:08.181767 systemd-logind[1530]: New session 23 of user core. Sep 9 00:11:08.189428 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 00:11:08.422172 sshd[6337]: pam_unix(sshd:session): session closed for user core Sep 9 00:11:08.430777 systemd[1]: sshd@22-10.0.0.10:22-10.0.0.1:35690.service: Deactivated successfully. Sep 9 00:11:08.434203 systemd-logind[1530]: Session 23 logged out. Waiting for processes to exit. 
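
The "Nameserver limits exceeded" events record kubelet trimming the node's resolver configuration when composing pod resolv.conf files: the glibc resolver honours at most three nameserver entries, so anything beyond the first three (here 1.1.1.1, 1.0.0.1, and 8.8.8.8) is dropped with this warning. A minimal sketch of that trimming rule is below; the function name is illustrative, and the fourth nameserver in the usage example is a made-up stand-in, since the log does not show which entry was dropped.

    package main

    import "fmt"

    // glibc's resolver, and therefore kubelet's pod resolv.conf handling,
    // honours at most three nameserver entries.
    const maxNameservers = 3

    // trimNameservers keeps the first maxNameservers entries and reports
    // whether anything was dropped, matching the warning logged above.
    func trimNameservers(ns []string) ([]string, bool) {
        if len(ns) <= maxNameservers {
            return ns, false
        }
        return ns[:maxNameservers], true
    }

    func main() {
        applied, dropped := trimNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
        if dropped {
            fmt.Println("Nameserver limits were exceeded, applied line:", applied)
        }
    }
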
Sep 9 00:11:08.434216 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:11:08.436888 systemd-logind[1530]: Removed session 23.